| { | |
| "title": "A Recent Survey of Heterogeneous Transfer Learning", | |
| "abstract": "The application of transfer learning, leveraging knowledge from source domains to enhance model performance in a target domain, has significantly grown, supporting diverse real-world applications. Its success often relies on shared knowledge between domains, typically required in these methodologies. Commonly, methods assume identical feature and label spaces in both domains, known as homogeneous transfer learning. However, this is often impractical as source and target domains usually differ in these spaces, making precise data matching challenging and costly. Consequently, heterogeneous transfer learning (HTL), which addresses these disparities, has become a vital strategy in various tasks.\nIn this paper, we offer an extensive review of over 60 HTL methods, covering both data-based and model-based approaches. We describe the key assumptions and algorithms of these methods and systematically categorize them into instance-based, feature representation-based, parameter regularization, and parameter tuning techniques. Additionally, we explore applications in natural language processing, computer vision, multimodal learning, and biomedicine, aiming to deepen understanding and stimulate further research in these areas. Our paper includes recent advancements in HTL, such as the introduction of transformer-based models and multimodal learning techniques, ensuring the review captures the latest developments in the field. We identify key limitations in current HTL studies and offer systematic guidance for future research, highlighting areas needing further exploration and suggesting potential directions for advancing the field.", | |
| "sections": [ | |
| { | |
| "section_id": "1", | |
| "parent_section_id": null, | |
| "section_name": "Introduction", | |
| "text": "In recent decades, the field of machine learning has experienced remarkable achievements across diverse domains of application. Notably, the substantial progress made in machine learning can be attributed to the extensive utilization of abundant labeled datasets in the era of big data. Nonetheless, the acquisition of labeled data can present challenges in terms of cost or feasibility within certain practical scenarios. To address this issue, transfer learning [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###] has emerged as a promising technique for enhancing model performance in a target domain by leveraging knowledge transfer from one or more source domains. The source domain typically offers a more accessible or economical means of obtaining labeled data. This notion exhibits conceptual similarities to the transfer learning paradigm observed in psychological literature, where the aim is to generalize experiences from prior activities to new ones. For instance, the knowledge (e.g., pitch relationships, harmonic progressions, and musical structures) acquired from playing violins can be applied to the task of playing pianos, serving as a practical illustration of transfer learning. The effectiveness of transfer learning crucially hinges on the relevance between the new task and past tasks.\nTypically, transfer learning is divided into two main categories: homogeneous transfer learning and heterogeneous transfer learning (HTL). The former pertains to scenarios where the source and target domains have matching feature and label spaces. However, real-world applications frequently involve disparate feature spaces and, occasionally, dissimilar label spaces between the source and target domains. Unfortunately, in these scenarios, collecting source domain data that seamlessly aligns with the target domain\u2019s feature space can prove infeasible or prohibitively expensive. Moreover, as new data and domains emerge, HTL facilitates models to continuously adapt and remain up-to-date without beginning from scratch. 
Consequently, researchers have directed significant attention towards investigating HTL techniques, which have shown promise across various tasks [6, 7, 8, 9].\nPrevious literature reviews have predominantly focused on homogeneous transfer learning approaches.\nSeveral surveys [3, 4, 5, 10, 11] have systematically categorized and assessed a wide spectrum of transfer learning techniques, taking into account various aspects such as algorithmic categories and application scenarios.\nAn emerging trend is conducting literature reviews on technologies that combine transfer learning with other machine learning techniques, such as deep learning [12, 13], reinforcement learning [14, 15, 16], and federated learning [17, 18].\nBeyond algorithm-centric surveys, certain reviews have concentrated specifically on applications in computer vision (CV) [19, 20, 21, 22], natural language processing (NLP) [23, 24, 25], medical image analysis [26, 27], and wireless communication [28, 29].\nWhile there exist three surveys [30, 31, 32] on HTL, the first two primarily cover approaches proposed before 2017.\nThe third survey [32] is more recent but focuses only on feature-based algorithms, a subset of HTL methods.\nNone of them incorporates the latest advancements in this area, especially the advent of the transformer [33] and its descendants, such as Bidirectional Encoder Representations from Transformers (BERT) [34] and Generative Pre-trained Transformer (GPT) [35]. Since 2017, the field of HTL has continued to flourish with ongoing research. Specifically, large-scale foundation models are now publicly available, exhibiting significant potential to provide a robust and task-agnostic starting point for transfer learning applications. Leveraging HTL not only enhances model performance on target tasks by initiating with pre-existing knowledge but also significantly reduces training time and resource usage through fine-tuning of pre-trained models. Furthermore, another notable advancement is the embrace of multi-modality, where knowledge from different domains is combined to enhance learning outcomes [36, 37]. Multimodal learning has shown tremendous promise in handling data from diverse modalities like images, text, and audio, which is pivotal in tasks such as image captioning, visual question answering, and cross-modal retrieval. In summary, HTL is of paramount importance as it substantially enhances the performance, adaptability, and efficiency of machine learning models across an extensive range of applications. Since there has been a notable absence of subsequent summarization efforts to capture the advancements in this area, we present an exhaustive review of the state-of-the-art in HTL to fill this gap, with a focus on recent breakthroughs.\nContributions. 
This survey significantly contributes to the field of HTL by providing an extensive overview of methodologies and applications (the papers reviewed in the survey, along with associated resources including code and datasets, can be accessed at https://github.com/ymsun99/Heterogeneous-Transfer-Learning), and by offering detailed insights to guide future research. The key contributions are:\nThis paper provides an extensive review of more than 60 HTL methods, detailing their underlying assumptions and key algorithms. It systematically categorizes these methods into data-based and model-based approaches, offering insights into different HTL strategies, including instance-based, feature representation-based, parameter regularization, and parameter tuning.\nThe survey includes recent advancements in HTL, such as the introduction of transformer-based models and multimodal learning techniques, ensuring the review captures the latest developments in the field.\nThe survey identifies key limitations in current HTL studies and offers systematic guidance for future research. It highlights areas needing further exploration and suggests potential directions for advancing the field.\nOrganization.\nWe organize the rest of the paper as follows. First, we introduce notations and problem definitions in Section 2. Second, we provide an overview of data-based HTL methods in Section 3, including instance-based and feature representation-based approaches. Third, we discuss model-based methods in Section 4. Then, we delve into methods in application scenarios in Section 5. Finally, we present the concluding remarks of the paper." | |
| }, | |
| { | |
| "section_id": "2", | |
| "parent_section_id": null, | |
| "section_name": "Preliminary", | |
| "text": "" | |
| }, | |
| { | |
| "section_id": "2.1", | |
| "parent_section_id": "2", | |
| "section_name": "Notations and Problem Definitions", | |
| "text": "Notations. To simplify understanding, we provide a summary of notations in the following list.\n{ldescription}\nSource Domain.\nTarget Domain.\nFeature size of the source domain.\nFeature size of the target domain.\nInstance size of the source domain.\nInstance size of the target domain.\nFeature space of the source domain.\nFeature vector of one instance in the source domain.\nData matrix of all instances in the source domain.\nLabel space of the source domain.\nLabels of all instances in the source domain.\nFeature space of the target domain.\nFeature vector of one instance in the target domain.\nData matrix of all instances in the target domain.\nLabels of all instances in the target domain.\nLabel space of the target domain.\nRegularization function.\nObjective function.\nProblem Definitions.\nIn this survey, a domain comprises a feature space and a marginal probability distribution where . For a given specific domain , a task consists a label space and an objective predictive function . Source domain data is denoted as where and , and similarly, target domain data is denoted as \n where and . In most cases, .\nGiven source domain data and task , and target domain and task ,\ntransfer learning, in this context, involves leveraging the knowledge from and to enhance the learning of the objective predictive function in , where or . Specifically, the condition indicates differences in either the feature spaces, , or marginal distributions, . Similarly, the condition implies disparities in either the label spaces or the objective functions . These differences distinguish between homogeneous and heterogeneous transfer learning. In homogeneous transfer learning, feature spaces and label spaces are identical, while marginal distributions and objective functions can differ. Conversely, heterogeneous transfer learning, which is the primary focus of this survey, pertains to scenarios where either or .\nFurthermore, within the realm of transfer learning, domain adaptation [21 ###reference_b21###, 38 ###reference_b38###, 39 ###reference_b39###, 40 ###reference_b40###] is a subset characterized by and . However, it is important to note that the terms \u201cdomain adaptation\u201d and \u201ctransfer learning\u201d are often used interchangeably in the literature." | |
| }, | |
| { | |
| "section_id": "2.2", | |
| "parent_section_id": "2", | |
| "section_name": "Learning Scenarios", | |
| "text": "In HTL, the choice of methods is heavily influenced by the availability of labeled data in source and target domains. This section delves into three primary scenarios, each defined by the presence or absence of labeled data: (1) both source and target domains possess labeled data, though the target domain is likely to exhibit significant label scarcity; (2) only source domain has labels; and (3) an entirely unsupervised setting, where both domains do not have labels.\nThese scenarios each bring forth distinct challenges and objectives, demanding specialized approaches to efficiently harnessing available information and enabling knowledge transfer.\nIn this scenario, both the source and target domains possess labeled data. However, the target domain often lacks sufficient labeled data, which is a significant challenge. To address this, the methods in this category often use semi-supervised settings [41 ###reference_b41###] for the target domain. These settings comprise a limited amount of labeled data complemented by a substantial volume of unlabeled target data.\nThe goal is to use the labeled data from both domains, along with the unlabeled target data, to improve learning in the target domain.\nIn this specific scenario, labeled information is available exclusively from the source domain, leaving the target domain without labeled data. The challenge here involves utilizing the labeled source data effectively to make accurate predictions for the instances in the target domain.\nUnsupervised transfer learning addresses scenarios where instances in both the source and target domains are unlabeled. The primary objective in this context is to harness meaningful and transferable knowledge from a source domain to enhance learning in a target domain, notwithstanding the lack of labeled data." | |
| }, | |
| { | |
| "section_id": "2.3", | |
| "parent_section_id": "2", | |
| "section_name": "Data-based vs. Model-based", | |
| "text": "The methodologies outlined in our survey can be broadly divided into two major categories: data-based methods, as covered in Section 3 ###reference_###, and model-based methods, elaborated upon in Section 4 ###reference_###. Figure 1 ###reference_### and Table 1 ###reference_### provides an overview.\nData-based methods involve the transfer of either the original data or their transformed features to a target domain, allowing the target model to be trained with this augmented data, thereby enriching the available data within the target domain. Conversely, model-based methods center around constructing models and learning their parameters exclusively within the source domain. By adapting both the model structure and parameters of a source model, the target models inherit the underlying insights from the prior knowledge in the source domain, consequently leading to enhanced performances.\nDelving deeper, the data-based section distinguishes between instance-based methods in Section 3.1 ###reference_### and feature representation-based ones in Section 3.2 ###reference_###. Instance-based methods utilize intermediate data\nthat relates to both source and target domains, effectively serving as a bridge between them. In contrast, feature representation-based methods employ techniques such as feature mapping or feature augmentation to align the features of both domains, transforming them into a shared space without involving additional data.\nIn the model-based part, methods are also further classified into parameter-regularization in Section 4.1 ###reference_### and parameter-tuning methods in Section 4.2 ###reference_###. In the former category, the objective function integrates regularization techniques to control parameter differences between both models. Target models in this category begin with random initialization and are trained on target tasks. During training, they are constrained to ensure that their parameters do not significantly diverge from those of the source models.\nConversely, the latter category involves initializing target models using parameters from source models and subsequently refining them through fine-tuning on specific target tasks.\n###table_1###" | |
| }, | |
| { | |
| "section_id": "3", | |
| "parent_section_id": null, | |
| "section_name": "Data-based Method", | |
| "text": "In transfer learning, data-based methods seek to integrate additional data instances that are not solely restricted to the target domain. These methods encompass instances from source domains and, where applicable, intermediate domains, as is especially pertinent in instance-based approaches. In HTL, the core strategy of these methods involves aligning the feature spaces that originate from both the source and target domains. This alignment fosters the creation of a unified, common space conducive to the integration of augmented information from all respective domains. By doing so, data-based methods significantly enrich the learning process, offering a substantial potential to boost models\u2019 adaptability and performance in varied scenarios." | |
| }, | |
| { | |
| "section_id": "3.1", | |
| "parent_section_id": "3", | |
| "section_name": "Instance-based Method", | |
| "text": "To establish a connection between heterogeneous source and target domains, it is intuitive to incorporate additional information to explore the latent relationships between these two feature spaces and . Methods capitalizing on such supplementary information are classified under instance-based approaches. The supplementary information is termed as intermediate data . Intermediate data, as shown in Figure 2 ###reference_###, act as a bridge between the unrelated or weakly related source and target domains. The intermediate data shares relevance or characteristics with both source and target domains, thereby facilitating the discovery of underlying patterns and relationships between them.\n\n###figure_2### Instance-based methods draw inspiration from Multi-View Learning, where data instances are represented by multiple distinct feature representations or \u201cviews\u201d. Each view captures different facets or perspectives of the data, thereby providing a multifaceted understanding of the instances. In the context of intermediate data, one view may share homogeneous features with the source domain data, while another view shares the same characteristics with the target domain data. For example, in scenarios involving disparate data types like text and images, images with text annotations can serve as the intermediate data. With instance-based methods, the essence of knowledge transfer lies in the propagation of information from the source domain , channeled through the intermediate domain , ultimately reaching and enriching the target domain as shown in,\nWe delve deeper into the exploration of intermediate data utilization through the following illustrative examples.\nTransitive Transfer Learning (TTL) [42 ###reference_b42###] introduces intermediate domain data . This intermediate data is strategically designed to share distinct common factors with both the source domain and target domain . TTL employs non-negative matrix tri-factorization (NMTF) on , and , which is formulated as . This approach is applied concurrently across the three domains. In this formulation, represents the data matrix. Given The variables and represent the number of feature clusters and instance clusters, , , and correspond to feature clusters, instance clusters, and the associations between feature clusters and instance clusters respectively. TTL\u2019s core mechanism involves feature clustering through NMTF, resulting in two interrelated feature representations. Knowledge transfer occurs by propagating label information from the source domain to the target domain. This process uses two pairs of coupled feature representations: one links the source and intermediate domains, and the other connects the target and intermediate domains.\nIn some cases, directly obtaining corresponding pairs between target and source domains can be challenging. Instead of relying on such pairs, the Heterogeneous Transfer Learning for Image Classification (HTLIC) method [43 ###reference_b43###] enriches the representation of target images with semantic concepts extracted from auxiliary source documents. HTLIC incorporates intermediate data, which are auxiliary images that have been annotated with text tags sourced from the social Web, effectively establishing a bridge between image (the target domain) and text (the source domain).\nHTLIC employs two matrices, specifically denoted as and , which capture correlations between images and tags, as well as text and tags, respectively. 
In some cases, directly obtaining corresponding pairs between target and source domains can be challenging. Instead of relying on such pairs, the Heterogeneous Transfer Learning for Image Classification (HTLIC) method [43] enriches the representation of target images with semantic concepts extracted from auxiliary source documents. HTLIC incorporates intermediate data, namely auxiliary images annotated with text tags sourced from the social Web, effectively establishing a bridge between image (the target domain) and text (the source domain). HTLIC employs two correlation matrices, one capturing correlations between images and tags and the other between documents and tags. Unlike traditional class labels, these tags encapsulate semantic representations that describe specific attributes or characteristics of data instances. Through matrix bi-factorization of these correlation matrices and the minimization of a shared reconstruction objective, HTLIC learns latent representations for target image instances, intermediate tags, and source document instances. Following that, HTLIC appends the obtained latent representations to the target instances, resulting in transformed target features.\nInspired by the success of deep neural networks (DNNs) in transfer learning, the Deep semantic mapping model for Heterogeneous multimedia Transfer Learning (DHTL) [44] utilizes a specialized form of intermediate data known as co-occurrence data, which consists of instance pairs\u2014one from the source domain and one from the target domain\u2014such as text-to-image pairs and multilingual text pairs. DHTL integrates auto-encoders with multiple layers to jointly learn the domain-specific networks and the shared inter-domain representation using co-occurrence data.\nTo facilitate the alignment of semantic mappings between the source and target domains, DHTL incorporates Canonical Correlation Analysis [78] to match the semantic representations of co-occurrence data pairs layer by layer. Consequently, the method learns a common semantic subspace that allows the utilization of labeled source features for model development in the target domain.\nPrevious instance-based methods focus on offline or batch learning problems, which assume that all training instances from the target domain are available in advance. However, this assumption may not hold in many real-world applications. Several online HTL methods are capable of addressing scenarios where the target data sequence is acquired incrementally in real time, while the offline source instances are available at the start of the training process. Since labeled target instances are often extremely limited at the start of training, it is particularly important to transfer knowledge from source domains in these scenarios. We introduce two online instance-based methods here.\nOnline Heterogeneous Knowledge Transition (OHKT) [45] bridges the target (image) and source (text) domains by generating pseudo labels for co-occurrence data, which consist of text-image pairs. The approach involves training a classifier on the labeled source data and using it to assign pseudo labels to the co-occurrence data. These pseudo labels are subsequently utilized to assist the online learning task in the target domain, facilitating the transfer of knowledge from the source domain.
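A minimal sketch of this pseudo-labeling step is given below, assuming scikit-learn and illustrative variable names; OHKT's online update of the target learner is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_cooccurrence(X_src_text, y_src, co_text, co_image):
    """Label the image view of (text, image) co-occurrence pairs.

    A classifier fit on labeled source texts predicts labels for the
    text view; each paired image inherits that pseudo label, producing
    labeled data directly in the target (image) feature space.
    """
    clf = LogisticRegression(max_iter=1000).fit(X_src_text, y_src)
    pseudo_labels = clf.predict(co_text)
    # Warm-start data for the online learner in the target domain.
    return co_image, pseudo_labels
```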
Directly using co-occurrence data can be simplistic and may not capture the underlying nuances of similarity. Addressing this, Online Heterogeneous Transfer learning by Hedge Ensemble (OHTHE) [46] introduces a measure of heterogeneous similarity between target and source instances using co-occurrence data. Specifically, OHTHE derives the similarity between a target instance and a source instance by incorporating co-occurrence pairs into the similarity computation, combining similarity measures defined separately in the source and target domains. Notably, the Pearson correlation is employed as the similarity metric for both domains, ensuring consistency in the similarity evaluation. This similarity measure is then employed to guide the classification of unlabeled target instances by incorporating information from source labels. OHTHE achieves this by learning an offline decision function for the target instances, accomplished by aligning the source label information for target instances using the similarity measure. Simultaneously, OHTHE utilizes target data to directly construct an online updated classifier. The final ensemble classifier is formed by combining the offline and online classifiers through a convex combination, and the method employs a hedge weighting strategy [79] to update the parameters in an online manner.\nIn summary, this section has explored both offline [42, 43, 44] and online [45, 46] instance-based methods. These methods are characterized by the use of an intermediate domain, with some [44, 45, 46] employing a specific type of intermediate domain known as co-occurrence data. While some instance-based methods utilize traditional techniques such as matrix factorization [42, 43], others incorporate deep neural networks [44].\nWhile instance-based methods are typically intuitive and effective for connecting heterogeneous source and target domains by leveraging supplementary data to discover underlying relationships, there are scenarios where obtaining an adequate amount of supplementary data is challenging. In such cases, instance-based methods may inadvertently lead to what is known as \u2018over-adaptation\u2019. Over-adaptation occurs when weakly correlated features, which lack semantic counterparts in the other domain, are compelled into a common feature space within the latent domain. This phenomenon can hinder the performance of transfer learning [80]. Furthermore, there are situations in which acquiring intermediate data is not feasible due to various constraints. In such cases, it becomes imperative to explore alternative strategies that do not rely on the availability of intermediate domain data, such as feature representation-based methods." | |
| }, | |
| { | |
| "section_id": "3.2", | |
| "parent_section_id": "3", | |
| "section_name": "Feature Representation-based Method", | |
| "text": "In HTL, feature representation-based approaches hold a paramount position. These methods tackle the heterogeneity between the source feature space and the target feature space by aligning the heterogeneous spaces into a cohesive unified space, denoted as . This alignment is realized by learning two projection functions, as illustrated in\nwhere and are the projection functions in the source and target domain, respectively.\nIn this unified space , the diverse features from the original heterogeneous spaces can be effectively compared and shared, paving the way for enhanced learning across different domains.\nThe primary goal of the feature representation-based method is to reduce the disparity between the source and target domains, with the evaluation of the similarity of their distributions being a critical initial step in this process.\nIn this context, the Maximum Mean Discrepancy (MMD) [81 ###reference_b81###] is employed as a measure of distribution similarity. MMD assesses the distances between the means of distributions in a Reproducing Kernel Hilbert Space (RKHS) according to the following formula:\nMinimizing the MMD value implies a reduction in distribution disparity between the source and target domains, indicating that the features in both domains are becoming more similarly distributed. Achieving a minimized MMD value is pivotal as it signifies a successful alignment of feature distributions across two domains, which is a fundamental step toward mitigating the discrepancy between them. In addition to the MMD metric, there are other measures such as Soft-MMD [54 ###reference_b54###] and the -distance [82 ###reference_b82###]. However, these are not as commonly utilized as the MMD metric.\nFeature representation-based methods are mainly divided into two fundamental operations: feature mapping and feature augmentation.\nThe feature mapping operation involves projecting source and target features into a shared representation space. This mapping aims to align the feature distributions of two domains and mitigate the underlying heterogeneity, thus facilitating the seamless transfer of knowledge between them. On the other hand, feature augmentation methods incorporate both domain-invariant features and the original domain-specific features from each domain. This approach not only considers a common subspace for comparing heterogeneous data but also keeps the domain-specific patterns, leading to more comprehensive and effective feature representations.\nThe Cross-Domain Landmark Selection (CDLS) method [6 ###reference_b6###] establishes a common homogeneous space by projecting the target data into a subspace using PCA. To bring the source-domain data into this subspace, CDLS utilizes a feature transformation matrix denoted as which helps to eliminate domain difference. By learning , the technique aims to match the marginal distributions and , while also aligning the conditional distributions and .\nUtilizing information from label distributions, Supervised Heterogeneous Domain Adaptation via Random Forests (SHDA-RF) [7 ###reference_b7###] derives the pivots that serve as corresponding pairs, bridging the gap between the heterogeneous source and target domains.\nThe SHDA-RF process begins by identifying pivots from both the source and target random forest models, which share the same label distributions. These pivots act as connections between the heterogeneous feature spaces. Utilizing the derived pivots, the method estimates feature contribution matrices and . 
Feature representation-based methods are mainly divided into two fundamental operations: feature mapping and feature augmentation.\nThe feature mapping operation involves projecting source and target features into a shared representation space. This mapping aims to align the feature distributions of the two domains and mitigate the underlying heterogeneity, thus facilitating the seamless transfer of knowledge between them. On the other hand, feature augmentation methods incorporate both domain-invariant features and the original domain-specific features from each domain. This approach not only considers a common subspace for comparing heterogeneous data but also keeps the domain-specific patterns, leading to more comprehensive and effective feature representations.\nFeature mapping methods are examined in detail in Section 3.2.1; the remainder of this section focuses on feature augmentation.\nIn the Semi-supervised Heterogeneous Feature Augmentation (SHFA) method [61, 62], source features and target features are augmented as\n$\tilde{x}^S = [P x^S; x^S; \mathbf{0}_{d_T}], \quad \tilde{x}^T = [Q x^T; \mathbf{0}_{d_S}; x^T],$\nwhere $P$ and $Q$ are two projection matrices that map the source and target features into a shared $d_c$-dimensional common space, and $\mathbf{0}_{d}$ denotes a $d$-dimensional zero vector. By performing this feature augmentation, the heterogeneous source and target domains are effectively connected in a $(d_c + d_S + d_T)$-dimensional common space, enabling the transfer of knowledge and information between the two domains.\nAn alternative strategy discards the concept of a common feature space. Instead, it initializes the source and target features as\n$\tilde{x}^S = [x^S; \mathbf{0}_{d_T}], \quad \tilde{x}^T = [\mathbf{0}_{d_S}; x^T],$\nwhich reduces the dimensionality from $d_c + d_S + d_T$ to $d_S + d_T$. This reduction can yield advantages in computational efficiency.\nDiscriminative Correlation Analysis (DCA) [63] and Knowledge Preserving and Distribution Alignment (KPDA) [64] augment the target features with a learnable linear transformation of themselves, which avoids the curse-of-dimensionality problem in SHFA.\nEquipped with deep learning techniques, [65] proposes the Symmetric Generative Adversarial Networks (Sym-GANs) algorithm. This algorithm trains one Generative Adversarial Network (GAN) to map the source features to target features and another GAN for the reverse mapping. Using labeled source domain data and target domain data, the Sym-GANs algorithm learns bidirectional mappings denoted by $G_{S \rightarrow T}$ and $G_{T \rightarrow S}$. With these mappings, augmented features can be obtained:\n$\tilde{x}^S = [x^S; G_{S \rightarrow T}(x^S)], \quad \tilde{x}^T = [x^T; G_{T \rightarrow S}(x^T)].$\nThese newly generated representations are then used for training a classifier of target instances for enhanced discriminative capability.
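The augmentation schemes above reduce to simple concatenations; the following sketch illustrates the SHFA-style form with placeholder (unlearned) projections P and Q, which in the actual method would be optimized rather than fixed.

```python
import numpy as np

def augment(xs, xt, P, Q):
    """SHFA-style augmentation: [P xs; xs; 0] and [Q xt; 0; xt]."""
    ds, dt = xs.shape[0], xt.shape[0]
    xs_aug = np.concatenate([P @ xs, xs, np.zeros(dt)])
    xt_aug = np.concatenate([Q @ xt, np.zeros(ds), xt])
    return xs_aug, xt_aug

dc, ds, dt = 16, 100, 50
rng = np.random.default_rng(0)
P, Q = rng.normal(size=(dc, ds)), rng.normal(size=(dc, dt))
xs_aug, xt_aug = augment(rng.normal(size=ds), rng.normal(size=dt), P, Q)
# Both augmented vectors live in the same (dc + ds + dt)-dimensional space.
assert xs_aug.shape == xt_aug.shape == (dc + ds + dt,)
```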
Some methods assume that instances from both the source and target domains share identical feature spaces. As a result, they construct a unified instance-feature matrix that includes all instances across both domains. By addressing the matrix completion challenge and subsequently reconstructing the \u201cground-truth\u201d feature-instance matrix, they obtain enhanced features within this common space.\nGiven a set of labeled instances from the source domain, unlabeled instances from the source domain, unlabeled instances from the target domain, and corresponding pairs between the source and target domains, the Heterogeneous Transfer Learning through Active correspondences construction (HTLA) method [66] first builds a unified instance-feature matrix for all the instances. To address missing data, zero-padding is employed, leading to the matrix\n$X = \begin{bmatrix} X_S & \mathbf{0} \\ \mathbf{0} & X_T \end{bmatrix},$\nwhere the zero blocks mark the unobserved target features of source instances and the unobserved source features of target instances. Subsequently, the missing entries within $X$ undergo a recovery procedure accomplished through a matrix completion mechanism based on distribution matching, particularly utilizing the MMD. The final result is the fully recovered and completed matrix $\hat{X}$. A singular value decomposition $\hat{X} = U \Sigma V^{\top}$ is then applied, and the domain data are projected into a shared latent space defined by the top-$k$ right singular vectors $V_k$, yielding the transformed feature matrix $\tilde{X} = \hat{X} V_k$. HTLA trains a classifier on the new feature representations of the labeled source data, comprising the first $n_S$ rows of $\tilde{X}$, and applies it to predict on the target domain data, encompassing the last $n_T$ rows of $\tilde{X}$.\nThe corresponding pairs employed in HTLA can be missing in some situations, and Multiple Kernel Learning (MKL) [67] is proposed to address this problem. Given labeled source domain data and target domain data, including a few labeled instances and many unlabeled ones, MKL augments the data using the same zero-padding scheme as above. The approach introduces two latent factor matrices: one serving as the latent representations of the zero-padded matrix and the other acting as the dictionary for matrix completion. This framework facilitates the matrix completion process, leading to the acquisition of a latent feature representation.\nDifferent from previous methods that rely on conventional matrix completion techniques, Deep Matrix Completion with Adversarial Kernel Embedding (Deep-MCA) [68] proposes a deep learning based framework. This approach employs an auto-encoder architecture, with an encoder $f$ and a decoder $g$, to perform matrix completion on the zero-padded augmented matrix defined above. By applying the encoder $f$ to the augmented source and target features, mapping them into a Reproducing Kernel Hilbert Space, the method can use the newly generated representations to train a classifier for the target domain.\nIn this subsection, we discussed feature augmentation methods, which focus on enriching the domain-invariant feature space while preserving domain-specific features. Various techniques are employed to achieve this. Some methods utilize projection matrices [62, 63, 64] or neural networks [65], drawing on approaches similar to feature mapping, to construct the domain-invariant space. Additionally, a particularly prevalent and effective method known as matrix completion is often used to augment the feature space in heterogeneous domain scenarios [66, 67].
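To illustrate the shared construction behind HTLA and MKL, here is a hedged NumPy sketch of the zero-padded unified matrix and the SVD projection step; the completion step itself (distribution-matching or dictionary-based) is abstracted away.

```python
import numpy as np

def unified_matrix(Xs, Xt):
    """Stack source and target instances with zero-padded missing blocks."""
    ns, ds = Xs.shape
    nt, dt = Xt.shape
    top = np.hstack([Xs, np.zeros((ns, dt))])  # source rows: target features unknown
    bot = np.hstack([np.zeros((nt, ds)), Xt])  # target rows: source features unknown
    return np.vstack([top, bot])               # shape (ns + nt, ds + dt)

def project_top_k(X_completed, k):
    """Project a completed matrix onto its top-k right singular vectors."""
    U, s, Vt = np.linalg.svd(X_completed, full_matrices=False)
    return X_completed @ Vt[:k].T              # shared latent representation
```

A classifier would then be trained on the first ns rows (labeled source) of the projected matrix and applied to its last nt rows (target instances).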
For data-based methods, we have delved into their intricacies, providing a comprehensive examination of their workings and nuances. While the effectiveness of data-based methods is well documented, they do have limitations. Their primary drawback is the prerequisite for extensive training data from at least one of the domains, combined with the demand for substantial computational resources for parameter learning. This can pose challenges in scenarios with restricted data availability. Furthermore, the dependence on incorporating source data can raise significant data privacy concerns, especially when handling sensitive or proprietary information, thereby limiting the applicability of these methods in various domains.\nTo tackle these challenges, the paradigm of transferring well-developed models from the source domains offers an attractive alternative. We explore this avenue further in the subsequent section on model-based methods." | |
| }, | |
| { | |
| "section_id": "3.2.1", | |
| "parent_section_id": "3.2", | |
| "section_name": "3.2.1 Feature Mapping", | |
| "text": "Feature mapping refers to the process of transforming or encoding input features into new representations that are better suited for specific tasks or analysis. In the context of traditional feature mapping, the objective is to extract informative features from the original data. This transformation can utilize various techniques depending on the nature of the data and the specific tasks involved. For example, Principal Component Analysis (PCA) [83 ###reference_b83###] is an unsupervised dimensionality reduction technique that aims to reduce the data dimensionality and retain the most informative features by maximizing its variance after transformation. With label information, Linear Discriminant Analysis (LDA) [84 ###reference_b84###] is a supervised dimensionality reduction technique. Its primary objective is to find a projection that not only reduces the dimensionality but also maximizes the distinction among different classes. By achieving this, LDA effectively transforms the data into a lower-dimensional space where class distinction is significantly improved.\nTo handle heterogeneity in the original feature spaces, feature mapping projects the original features of the source and target domains into an aligned feature space. This process seeks to extract valuable features from original data while capturing relevant information and harmonizing the distributions of both domains. Feature mapping techniques in HTL encompass various approaches, including linear transformations, nonlinear mappings, and more complex deep learning architectures.\nAs shown in Figure 3 ###reference_###, the feature mapping approaches can be categorized into two types: symmetric transformation and asymmetric transformation. As illustrated in Eq. 4 ###reference_###, the goal of symmetric feature mapping is to learn a pair of projections and , which map the source domain data and the target domain instances , respectively, into a shared feature space.\nIn contrast, asymmetric feature mapping methods focus on learning a single projection function . This function is used to map either the source features into the feature space of the target domain or vice versa. The ultimate goal of this approach is to find a transformation that adapts the features of one domain to those of the other, thereby minimizing the differences between and or and .\n\n###figure_3### We will first delve into asymmetric methods. One prominent example is the Information-Theoretic Metric Learning (ITML) method [47 ###reference_b47###], which employs a linear transformation matrix . This matrix facilitates the translation of target instances into the source domain through or conversely, morphs source instances into the target domain using . Despite its merits, ITML encounters constraints when the dimensionalities of both domains aren\u2019t equivalent, thereby confining it to homogeneous contexts. To address this limitation, the Asymmetric Regularized Cross-domain transformation (ARC-t) method [48 ###reference_b48###] learns the transformations in kernel space. This innovation allows the method to be applied in more general cases where the domains do not have the same dimensionality. 
To handle heterogeneity in the original feature spaces, feature mapping projects the original features of the source and target domains into an aligned feature space. This process seeks to extract valuable features from the original data while capturing relevant information and harmonizing the distributions of both domains. Feature mapping techniques in HTL encompass various approaches, including linear transformations, nonlinear mappings, and more complex deep learning architectures.\nAs shown in Figure 3, feature mapping approaches can be categorized into two types: symmetric transformation and asymmetric transformation. The goal of symmetric feature mapping is to learn a pair of projections $\varphi_S$ and $\varphi_T$, which map the source domain instances $X_S$ and the target domain instances $X_T$, respectively, into a shared feature space.\nIn contrast, asymmetric feature mapping methods focus on learning a single projection function $\varphi$. This function is used to map either the source features into the feature space of the target domain or vice versa. The ultimate goal of this approach is to find a transformation that adapts the features of one domain to those of the other, thereby minimizing the differences between $\varphi(X_S)$ and $X_T$, or between $X_S$ and $\varphi(X_T)$.\nWe will first delve into asymmetric methods. One prominent example is the Information-Theoretic Metric Learning (ITML) method [47], which employs a linear transformation matrix that translates target instances into the source domain or, conversely, morphs source instances into the target domain. Despite its merits, ITML encounters constraints when the dimensionalities of the two domains are not equal, thereby confining it to homogeneous contexts. To address this limitation, the Asymmetric Regularized Cross-domain transformation (ARC-t) method [48] learns the transformation in kernel space. This innovation allows the method to be applied in more general cases where the domains do not have the same dimensionality. Following this idea, asymmetric feature mapping can convert instances from one domain into another heterogeneous domain, thereby transforming a heterogeneous transfer learning problem into a homogeneous one.\nHaving established the foundational concepts in asymmetric feature mapping, the following paragraphs delve deeper into specific examples to further elucidate these principles and demonstrate their practical applications.\nThe Cross-Domain Landmark Selection (CDLS) method [6] establishes a common homogeneous space by projecting the target data into a subspace using PCA. To bring the source-domain data into this subspace, CDLS utilizes a feature transformation matrix that helps to eliminate domain differences. By learning this matrix, the technique aims to match the marginal distributions $P(X_S)$ and $P(X_T)$, while also aligning the conditional distributions $P(Y_S|X_S)$ and $P(Y_T|X_T)$.\nUtilizing information from label distributions, Supervised Heterogeneous Domain Adaptation via Random Forests (SHDA-RF) [7] derives pivots that serve as corresponding pairs, bridging the gap between the heterogeneous source and target domains.\nThe SHDA-RF process begins by identifying pivots from both the source and target random forest models that share the same label distributions. These pivots act as connections between the heterogeneous feature spaces. Utilizing the derived pivots, the method estimates feature contribution matrices for the two domains. Subsequently, a projection matrix is learned from these matrices, enabling the mapping of source features to target features.\nInstead of relying on instance correspondences, the Sparse Heterogeneous Feature Representation (SHFR) method [49] learns the feature mapping from the weight vectors of linear classifiers trained in each domain, aligning the source and target domains by relating their classifier weights.\nWhile asymmetric feature mapping offers flexibility and ease of implementation, with only one projection to learn [48], symmetric feature mapping is more commonly employed due to its versatility in HTL. Symmetric feature mapping involves the transformation of both feature domains into a shared latent feature space. The mapping transformation could be as simple as a pair of linear projections,\n$\tilde{X}_S = X_S P_S, \quad \tilde{X}_T = X_T P_T,$\nwhere $P_S$ and $P_T$ are projection matrices that map the source and target features into a common space $\mathcal{X}_C$.\nBy learning a common representation, symmetric feature mapping facilitates better alignment of feature distributions and enhances the generalization capability of the model by capturing underlying structures that are relevant to both domains.
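As a toy illustration (not any single paper's algorithm), the sketch below learns two such linear projections with PyTorch by minimizing a linear-kernel MMD between the projected domain means, plus a variance term that rules out the trivial all-zero mapping; the names and coefficients are assumptions.

```python
import torch

def learn_projections(Xs, Xt, d_c=32, steps=500, lr=1e-2):
    """Toy symmetric mapping: Zs = Xs @ Ps and Zt = Xt @ Pt in R^{d_c}."""
    Ps = torch.randn(Xs.shape[1], d_c, requires_grad=True)
    Pt = torch.randn(Xt.shape[1], d_c, requires_grad=True)
    opt = torch.optim.Adam([Ps, Pt], lr=lr)
    for _ in range(steps):
        Zs, Zt = Xs @ Ps, Xt @ Pt
        mmd = ((Zs.mean(0) - Zt.mean(0)) ** 2).sum()           # align first moments
        var = ((Zs.var(0) - 1) ** 2).sum() + ((Zt.var(0) - 1) ** 2).sum()
        loss = mmd + 0.1 * var                                 # keep non-degenerate scale
        opt.zero_grad(); loss.backward(); opt.step()
    return Ps.detach(), Pt.detach()
```

Real methods replace these two terms with richer objectives, as the following paragraphs show.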
In the following paragraphs, we explore various approaches and algorithms that utilize symmetric feature mapping to address HTL challenges.\nHeterogeneous Spectral Mapping (HeMap) [8] learns two linear transformation matrices using spectral embedding, optimizing an objective over the projected source and target data that enhances cross-domain similarity while preserving inherent structural characteristics. Preserving structural information is of paramount importance, particularly for accurate data classification [85].\nDomain Adaptation by Covariance Matching (DACoM) [50] introduces transformations that incorporate zero-mean characteristics into the mapped features. Specifically, it centers each domain before projection,\n$\tilde{x}^S = P_S (x^S - \mu_S), \quad \tilde{x}^T = P_T (x^T - \mu_T),$\nwhere $\mu_S$ and $\mu_T$ denote the means of $X_S$ and $X_T$, respectively. By doing so, the first moments are automatically equal, and DACoM minimizes the gap between the covariance matrices of the two domains to learn more consistent distributions of the projected instances.\nGiven multiple heterogeneous source domains, Heterogeneous Domain Adaptation using Manifold Alignment (DAMA) [9] considers each domain as a manifold, represented by a Laplacian matrix constructed from an affinity graph that captures relationships among instances. DAMA aims to reduce the dimensionality of the feature space while preserving manifold topology through generalized eigenvalue decomposition. This process generates a lower-dimensional feature space that can be utilized for transfer learning across domains. However, DAMA assumes that the data follow a manifold structure.\nWhile geometric manifold structures are pivotal, as discussed in previous methods, other latent factors also play a crucial role in establishing a connection between the source and target domains. Factors such as landmark instances, a select subset of labeled source instances closely distributed to the target domain, are of particular importance. The Locality Preserving Joint Transfer (LPJT) method [51] proposes a unified objective to optimize all of these aspects at the same time.\nThe transformation matrices are learned by minimizing the marginal and conditional MMD between the common space of the source and target domains, reducing domain shifts while preserving local manifold structures through the minimization of intra-class instance distances and the maximization of inter-class instance distances. By doing so, the LPJT method establishes a connection between heterogeneous source and target domains. Additionally, the LPJT method incorporates a re-weighting strategy for landmark selection, which aids in selecting pivot instances as bridges for effective knowledge transfer.\nThe Information Capturing and Distribution Matching (ICDM) method [52] introduces an approach similar to LPJT in utilizing MMD for aligning domain distributions but extends its scope beyond distribution matching. ICDM places emphasis on preserving original feature information through the minimization of a reconstruction loss between the original and reconstructed data. ICDM can thus capture and maintain the essential characteristics of the original features during the domain adaptation process.\nIn HTL, a recurrent challenge is the scarcity of label information within the target domain. This sparsity underscores the paramount importance of effectively harnessing whatever limited labels are available in the target setting [86]. In response to this challenge, several methods have been formulated. Some methods use label information to enforce the similarity of projected data points in the same class across different domains. 
Others incorporate a supervised classification loss into the objective function.\nThe Cross-Domain Structure Preserving Projection (CDSPP) algorithm [53] adopts a symmetric feature mapping that enforces the proximity of projected instances belonging to the same class, regardless of their original domain, using a similarity matrix over the training instances derived from the label information.\nThe Soft Transfer Network (STN) [54] simultaneously learns a domain-shared subspace and a classifier $f$.\nSTN constructs two projection networks, $\phi_s$ and $\phi_t$, dedicated to mapping the source and target data $X_s$ and $X_t$ into a common domain-invariant subspace. The optimization minimizes a classification loss computed over the source instances and the labeled target instances, jointly denoted $X_l$ with corresponding labels $Y_l$, together with a Soft Maximum Mean Discrepancy (Soft-MMD) loss that aligns both the marginal and conditional distributions between the domains.\nThe objective function of STN combines the classification loss and the Soft-MMD loss as\n$\min_{f, \phi_s, \phi_t} \; \mathcal{L}_{\mathrm{cls}}\big(f(\phi(X_l)), Y_l\big) + \lambda \, \mathcal{L}_{\mathrm{soft\text{-}MMD}}.$\nSoft-MMD extends the MMD concept: whereas MMD mainly captures the divergence of the marginal distributions, Soft-MMD further accounts for discrepancies in the conditional distributions across domains. It can be written as\n$\mathcal{L}_{\mathrm{soft\text{-}MMD}} = \mathrm{MMD}^2\big(\phi_s(X_s), \phi_t(X_t)\big) + \alpha(t) \sum_{c=1}^{C} \mathrm{MMD}^2\big(\phi_s(X_s^{(c)}), \phi_t(X_t^{(c)})\big),$\nwhere $\alpha(t)$ denotes an adaptive coefficient that increases with the current iteration $t$ relative to the total number of iterations $T$. To address the scarcity of labeled target instances, Soft-MMD leverages the unlabeled target data and assigns $C$-dimensional soft labels, representing the probabilities of the projected data belonging to each of the $C$ categories; the adaptive coefficient gradually increases the weight placed on these predicted labels.\nSemantic Correlation Transfer (SCT) [55] aims to transfer knowledge of semantic correlations among categories from the source domain to the target domain, measuring semantic correlations via cosine similarity between local centroids in the source domain. SCT uses two projection functions to map source and target features into a shared space and minimizes a loss comprising several components: the discrepancy in marginal distributions, the discrepancy in conditional distributions, the discrepancy in cosine distances among classes across both domains, and the supervised classification loss. In this way, SCT not only encourages domain-invariant features that reduce the mixing of features from different classes but also enhances the discriminative ability over categories in the target domain. Since marginal and conditional MMD terms recur throughout these objectives, a small code sketch of this building block follows below.
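The following self-contained sketch (Python/NumPy) computes the standard biased empirical MMD² between two samples, plus a class-conditional variant given hard labels. The Gaussian-kernel choice and all names are illustrative assumptions rather than any one paper's exact estimator.

```python
import numpy as np

def gaussian_gram(A, B, gamma=1.0):
    """Gaussian kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased empirical MMD^2 between samples X (n, d) and Y (m, d)."""
    return (gaussian_gram(X, X, gamma).mean()
            + gaussian_gram(Y, Y, gamma).mean()
            - 2 * gaussian_gram(X, Y, gamma).mean())

def conditional_mmd2(X, y, Xp, yp, num_classes, gamma=1.0):
    """Sum of per-class MMD^2 terms, skipping classes missing on a side."""
    total = 0.0
    for c in range(num_classes):
        Xc, Xpc = X[y == c], Xp[yp == c]
        if len(Xc) and len(Xpc):
            total += mmd2(Xc, Xpc, gamma)
    return total

# toy usage with already-projected source/target embeddings
rng = np.random.default_rng(0)
Zs, ys = rng.standard_normal((100, 8)), rng.integers(0, 3, 100)
Zt, yt = rng.standard_normal((80, 8)) + 0.5, rng.integers(0, 3, 80)
print(mmd2(Zs, Zt), conditional_mmd2(Zs, ys, Zt, yt, num_classes=3))
```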
Many HTL methods address either feature discrepancy or distribution divergence one at a time; however, optimizing one can enhance the other, and some methods therefore optimize both simultaneously.\nHeterogeneous Domain Adaptation through Progressive Alignment (HDAPA) [56] jointly optimizes the feature difference and the distribution divergence. The method maps the domain features into new representations in a shared latent space using two domain-specific projections and a common codebook $B$, and measures distribution divergence with the MMD metric. By alternately solving for the variables in an objective that couples the codebook-based reconstruction of each domain with the MMD alignment term, the algorithm progressively learns the new representations for the source and target domains.\nSimilarly, Heterogeneous Adversarial Neural Domain Adaptation (HANDA) [57] performs both feature and distribution alignment within a unified neural network architecture. It uses shared dictionary learning to project heterogeneous features into a common latent space, handling heterogeneity while alleviating feature discrepancy; an adversarial kernel matching method then reduces the distribution divergence, and a shared classifier minimizes the classification loss.\nNevertheless, lower-order statistics do not always fully characterize the heterogeneity of the domains [87]. Some methods therefore employ neural-network-based structures to map the heterogeneous feature domains into one shared representation space.\nThe Transfer Neural Trees (TNT) method [60, 88] jointly solves cross-domain feature mapping, adaptation, and classification in a neural-network-based architecture. TNT learns the source and target feature mappings $\phi_s$ and $\phi_t$ and updates them to minimize the prediction error on the labeled source data and the labeled target data. Because label information is lacking for the unlabeled target data, the method preserves prediction and structural consistency between the labeled and unlabeled target data when learning $\phi_t$.\nIn this subsection, we discussed feature mapping methods, which bridge the gap between source and target domains in HTL by projecting both into a shared, domain-invariant subspace. These methods can be categorized into asymmetric [47, 48, 6, 7, 49] and symmetric transformations, with symmetric transformations being the predominant type. They align the source and target domains by considering factors that include the domain distributions [50], manifold structure [9], and landmark selection [51]. Given that target label information is often limited, some methods [53, 54, 55] exploit it by employing a classification loss or by enforcing similarity among instances within the same category. Regarding the projections themselves, approaches range from basic matrices [8] and dictionary learning [56, 57] to neural networks [60]."
| }, | |
| { | |
| "section_id": "3.2.2", | |
| "parent_section_id": "3.2", | |
| "section_name": "3.2.2 Feature Augmentation", | |
| "text": "Within feature-based methods in HTL, feature augmentation is another pivotal strategy to align the heterogeneous domains in a common subspace. Distinct from feature mapping methods, which predominantly search for domain-invariant representations, feature augmentation methods go a step further by incorporating domain-specific features. It augments the original domain-specific features with the domain-invariant features learned through transformations. By doing so, it not only learns a common subspace where the heterogeneous data can be compared but also keeps domain-specific patterns [89 ###reference_b89###].\nFeature augmentation methods were first applied in homogeneous transfer learning. Consider source domain feature and target domain feature , the features in source and target domains can be augmented to be and respectively [90 ###reference_b90###], where is a zero vector. In this way, the augmented feature has both domain-invariant and domain-specific spaces. However, in the context of HTL, direct concatenation of features becomes a challenge due to the dimensionality disparities between the domains. This necessitates a deeper dive into creating a common space for both domains. Consequently, the processes of heterogeneous feature augmentation become intertwined with heterogeneous feature mapping.\n\n###figure_4### In the Semi-supervised Heterogeneous Feature \nAugmentation (SHFA) method [61 ###reference_b61### ###reference_b61###] [62 ###reference_b62### ###reference_b62###], source features and target features are augmented as,\nwhere and are two projection matrices that map the source and target features into a shared common space ; and are zero vectors. By performing this feature augmentation, the heterogeneous source and target domains are effectively connected in a -dimensional common space, enabling the transfer of knowledge and information between the two domains.\nThe alternative strategy discards the concept of common feature space. Instead, it initializes the source and target features as,\nwhich reduces the dimensionality from to . This reduction can yield advantages in computational efficiency.\nDiscriminative Correlation Analysis (DCA) [63 ###reference_b63### ###reference_b63###] and Knowledge Preserving and Distribution Alignment (KPDA) [64 ###reference_b64### ###reference_b64###] augment the target features as where is a learnable matrix, which can avoid the problem of the curse of dimensionality in SHFA.\nEquipped with deep learning techniques, [65 ###reference_b65### ###reference_b65###] proposes Symmetric Generative Adversarial Networks (Sym-GANs) algorithm. This algorithm trains one Generative Adversarial Network (GAN) to map the source features to target features and another GAN for reverse mapping. Using labeled source domain data and target domain data , the Sym-GANs algorithm learns bidirectional mappings denoted by and . With these mappings, augmented features can be obtained:\nThese newly generated representations are then used for training a classifier of target instances for enhanced discriminative capability.\nSome methods assume that instances from both the source and target domains share identical feature spaces. As a result, they construct a unified instance-feature matrix that includes all instances across both domains. 
Some methods assume that instances from both the source and target domains can be arranged in a single unified instance-feature matrix spanning both feature spaces. By addressing the resulting matrix completion problem and reconstructing the \u201cground-truth\u201d instance-feature matrix, they obtain enhanced features within this common space.\nGiven labeled source instances, unlabeled source instances, unlabeled target instances, and a set of corresponding pairs between the source and target domains, the Heterogeneous Transfer Learning through Active correspondences construction (HTLA) method [66] first builds a unified instance-feature matrix over all instances. Missing entries are zero-padded, yielding a block matrix of the form\n$Z = \begin{pmatrix} X_s & \mathbf{0} \\ X_c^s & X_c^t \\ \mathbf{0} & X_t \end{pmatrix},$\nwhere $X_c^s$ and $X_c^t$ denote the source and target representations of the corresponding pairs. The missing entries of $Z$ are then recovered through a matrix completion mechanism based on distribution matching, in particular the MMD, producing the completed matrix $\hat{Z}$. A singular value decomposition $\hat{Z} = U \Sigma V^\top$ is then applied, and the data are projected onto the shared latent space spanned by the top singular vectors, yielding the transformed feature matrix $F$. HTLA trains a classifier on the new feature representations of the labeled source data, comprising the first rows of $F$, and applies it to the target-domain data, comprising the last rows of $F$.\nThe corresponding pairs employed in HTLA can be missing in some situations, and Multiple Kernel Learning (MKL) [67] has been proposed to address this problem. Given labeled source-domain data and target-domain data containing a few labeled instances and many unlabeled ones, MKL augments the data via zero padding as\n$\tilde{X} = \begin{pmatrix} X_s & \mathbf{0} \\ \mathbf{0} & X_t \end{pmatrix}.$\nThe approach introduces two latent factor matrices: $U$, which serves as the latent representation of $\tilde{X}$, and $V$, which acts as the dictionary for matrix completion, so that $\tilde{X} \approx U V^\top$. This framework drives the matrix completion process and yields the latent feature representation $U$.\nDifferent from previous methods that rely on conventional matrix completion techniques, Deep Matrix Completion with Adversarial Kernel Embedding (Deep-MCA) [68] proposes a deep-learning-based framework. It employs an auto-encoder $g \circ f$, with encoder $f$ and decoder $g$, to perform matrix completion on the zero-padded matrix $\tilde{X}$ defined above. By applying the encoder to the augmented source and target features and mapping them into a Reproducing Kernel Hilbert Space, the method uses the newly generated representations to train a classifier for the target domain. A sketch of this zero-padding-plus-completion recipe appears below.
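To make the recipe concrete, the following sketch (Python/NumPy) zero-pads the two domains into one matrix and fills the missing blocks with a simple soft-impute-style low-rank completion. The rank-k SVD imputation here is a generic stand-in, an assumption for illustration, not the MMD-based or adversarial completion used by HTLA or Deep-MCA.

```python
import numpy as np

def lowrank_complete(Z, mask, k=5, iters=50):
    """Iterative rank-k SVD imputation: observed entries (mask==True) are
    kept; missing entries are repeatedly refilled from a rank-k approximation."""
    filled = np.where(mask, Z, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :k] * s[:k]) @ Vt[:k]
        filled = np.where(mask, Z, approx)   # keep observed, update missing
    return filled

rng = np.random.default_rng(0)
ns, nt, ds, dt = 80, 60, 12, 9
Xs, Xt = rng.standard_normal((ns, ds)), rng.standard_normal((nt, dt))
# unified instance-feature matrix with zero-padded missing blocks
Z = np.block([[Xs, np.zeros((ns, dt))],
              [np.zeros((nt, ds)), Xt]])
mask = np.block([[np.ones((ns, ds)), np.zeros((ns, dt))],
                 [np.zeros((nt, ds)), np.ones((nt, dt))]]).astype(bool)
Zhat = lowrank_complete(Z, mask, k=5)
U, s, Vt = np.linalg.svd(Zhat, full_matrices=False)
F = U[:, :5] * s[:5]      # shared latent features for all instances
print(F.shape)            # (140, 5): first 80 rows source, last 60 target
```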
In this subsection, we discussed feature augmentation methods, which enrich a domain-invariant feature space while preserving domain-specific features. Some methods utilize projection matrices [62, 63, 64] or neural networks [65], drawing on approaches similar to feature mapping, to construct the domain-invariant space. Additionally, matrix completion has proven a particularly prevalent and effective way to augment the feature space in heterogeneous domain scenarios [66, 67].\nFor data-based methods, we have delved into their intricacies, providing a comprehensive examination of their workings and nuances. While the effectiveness of data-based methods is well documented, they do have limitations. Their primary drawback is the prerequisite of extensive training data from at least one of the domains, combined with the demand for substantial computational resources for parameter learning; this can pose challenges in scenarios with restricted data availability. Furthermore, the dependence on incorporating source data can raise significant data privacy concerns, especially when handling sensitive or proprietary information, thereby limiting the applicability of these methods in various settings.\nTo tackle these challenges, the paradigm of transferring well-developed models from the source domains offers an attractive alternative, which we explore in the subsequent section on model-based methods." | |
| }, | |
| { | |
| "section_id": "4", | |
| "parent_section_id": null, | |
| "section_name": "Model-based Method", | |
| "text": "Model-based methods in HTL primarily focus on transferring a source domain\u2019s model structure and parameters to a target domain. Specifically, given source data and source labels , a source model is initially trained to obtain the optimal parameters . Subsequently, these parameters guide the formulation of the parameters in the target model .\nTwo primary strategies are employed to leverage to influence : parameter regularization and parameter tuning. Parameter regularization methods involve learning target models with a regularization term . The target model\u2019s parameters start with random initialization and are adjusted to align with the characteristics of the target domain, while being regularized to prevent significant deviation from . In contrast, parameter tuning initially sets the parameters to be equal to and subsequently adapts them to the target domains through fine-tuning. This strategy ensures that the target model parameters are initially aligned with those of the source model, and are later refined to accommodate the distinct characteristics of the target datasets." | |
| }, | |
| { | |
| "section_id": "4.1", | |
| "parent_section_id": "4", | |
| "section_name": "Parameter Regularization Method", | |
| "text": "Parameter regularization methods, as shown in Fig. 5 ###reference_###, aim to bridge the gap between the parameters of source and target models by introducing regularizers on their parameters. These techniques serve a dual purpose. First, they encourage the target models to embrace similar parameter values as those of the source models, thereby enabling them to leverage the general knowledge and patterns acquired from the source domain. Second, these methods provide the target models the flexibility needed to adapt to the distinct characteristics of the target domain. This adaptability is instrumental in enhancing the accuracy of the target model and safeguarding against over-fitting, a common concern when dealing with limited data from the target domain.\n\n###figure_5### However, it is worth noting that the widely existing difference between source and target feature spaces presents unique challenges, especially in scenarios involving multiple modalities. The model parameters learned in one domain may not be directly applicable to another domain due to variations in feature spaces. This disparity necessitates alignment processes to ensure the effective transfer of knowledge between source and target domains.\nThe REctiFy via heterOgeneous pRedictor Mapping (REFORM) [70 ###reference_b70###] employs a semantic mapping to handle heterogeneity in either the feature or label space.\nBy applying the semantic map , a source model\u2019s parameters are transformed to provide biased regularization that reflects prior knowledge for the target task\u2019s parameters as in\nThe REFORM deduces the semantic map by learning a transformation matrix . This matrix transforms the representation into for the heterogeneous feature space. Similarly, REFORM can accommodate a heterogeneous label space by modifying .\nWeakly-shared Deep Transfer Networks method (DTNs) [69 ###reference_b69###] employs two -layer stacked auto-encoders to derive aligned hidden representations from two heterogeneous domains. These aligned representations subsequently serve as input for the next sequence of -layer models specific to each domain. Rather than directly enforcing parameter sharing across domains, DTNs opt for separate series of layers structured as follows,\nwhere and are the encoders in source and target domains respectively, and and denote the -th layer hidden representation in source and target domain respectively.\nUnder the assumption of weak parameter sharing, this method introduces a regularizer that governs the differences only between the parameters of the last few layers as\nThis design choice allows the initial layers to learn domain-specific features, while the concluding layers specialize in identifying sharable knowledge across domains..\nParameter regularization methods, while effective in specific scenarios, can become time-consuming, particularly when dealing with significant domain shifts between the source and target domains. This is because they rely on random initialization, which can hinder their effectiveness in adapting to domain-specific patterns. To address these challenges, parameter tuning methods have been introduced as an alternative solution." | |
| }, | |
| { | |
| "section_id": "4.2", | |
| "parent_section_id": "4", | |
| "section_name": "Parameter Tuning Method", | |
| "text": "Parameter tuning methods in HTL, illustrated in Fig. 6 ###reference_###, are designed to enhance the abilities of pre-trained models to perform tasks for which they have not been extensively trained. The goal is to adeptly tune the parameters of these models, enabling their adaptation and specialization for various downstream tasks across different domains.\nParameter tuning methods encompass two distinct phases: pre-training and fine-tuning.\nIn the pre-training phase, a model is trained on extensive, diverse datasets for general tasks, often broader in scope than the specific target tasks. This enables the model to capture general patterns, providing valuable insights applicable to a variety of downstream tasks.\nIn the subsequent fine-tuning phase, the pre-trained model\u2019s parameters are fine-tuned on smaller, task-specific target datasets. This process tailors the encoded features to the particular task. By leveraging knowledge from pre-training, the final model can potentially outperform one trained from scratch, especially when target labeled data is limited, on the target task.\n\n###figure_6### One distinctive advantage of parameter tuning methods, which sets them apart from parameter regularization methods, is their effectiveness in reducing computational demands. By leveraging pre-trained models, these methods gain a strategic upper hand by initializing optimization processes from advantageous positions within the optimization landscape. This leads to faster convergence compared to starting the optimization process from random initial points.\nThe parameter tuning methods have proven highly effective across various domains, notably illustrated by the widespread application of models pre-trained on ImageNet [91 ###reference_b91###, 92 ###reference_b92###] in the field of CV, and the utilization of BERT [34 ###reference_b34###]\nin NLP tasks. These examples underscore the versatility and efficacy of parameter tuning methods in diverse applications, details of which will be explored in the following sections.\nFine-tuning methods, when distinguished based on the layers subjected to modification, fall into two categories: full and partial fine-tuning. Full fine-tuning necessitates that every layer of the pre-trained models be further trained using task-specific data. This comprehensive adjustment enables the model to tailor its parameters to the specificities of the target domain. [96 ###reference_b96###] shows that, for the localization task in the ImageNet Large Scale Visual Recognition Challenge [97 ###reference_b97###], fine-tuning all layers outperforms tuning only the fully connected layers. However, as indicated in [74 ###reference_b74###], direct knowledge transfer from source data might not always be optimal due to potential biases or even negative influences on the target class in certain scenarios. In such instances, partial fine-tuning methods could provide a viable alternative. In partial fine-tuning methods, only a subset of layers within the pre-trained models is modified while the rest remain frozen, preserving their pre-trained knowledge and ensuring the retention of general features and representations.\nPartial fine-tuning proves particularly valuable when dealing with smaller task-specific datasets, mitigating overfitting risks and leveraging pre-existing knowledge. 
| }, | |
| { | |
| "section_id": "4.2.1", | |
| "parent_section_id": "4.2", | |
| "section_name": "4.2.1 Pre-training in NLP", | |
| "text": "In the NLP area, pre-training aims to learn patterns and probabilistic relationships from large amounts of text data. The primary objective of pre-trained language models is to estimate the likelihood of a sequence of words as in Eq. (21 ###reference_###) or the likelihood of a sequence of words based on the context of the preceding words as in Eq. (22 ###reference_###)\nwhere denotes the -th word in one sentence. These models tackle a broad spectrum of NLP tasks, including but not limited to text prediction, text generation, text completion, and language understanding.\nAmong language models that have been pre-trained, transformer-based models have recently emerged as the most dominant. The transformer model, a neural network architecture introduced by [33 ###reference_b33###], is grounded in the concept of a multi-head self-attention mechanism. This mechanism allows the model to capture global dependencies and relationships among words in a sequence. Stemming from the transformer, two typical model structures have been developed: autoencoding models and autoregressive models.\nAutoencoding models aims to to learn compact representations of the input text in an unsupervised manner, typically designed for dimensionality reduction and feature learning. An autoencoder achieves this through two primary components: the encoder and the decoder. The encoder compresses the input data into lower-dimensional latent representations, and the decoder attempts to reconstruct the original input data from these compressed representations. The most renowned autoencoding model is BERT, which employs Masked Language Modeling to learn contextualized word representations. A percentage of the input tokens are randomly masked.\nThe model is then trained to predict these masked tokens based on their surrounding context. This bidirectional training allows BERT to capture both the left and right context of a word, enabling it to learn deep contextual representations. Furthermore, during pre-training, BERT utilizes Next Sentence Prediction to understand the relationships between sentences by providing pairs of sentences to the model and training it to predict whether the second sentence logically follows the first sentence in the original text. This task helps BERT learn sentence-level representations and capture discourse-level information.\nAutoregressive models adopt a decoder-only structure to model the conditional probability distribution of the succeeding token given the previous tokens in the sequence. These models are typically designed for text generation, dialogue generation, and machine translation. A key characteristic of autoregressive models is their dependence on previously generated tokens to inform the generation of subsequent tokens. During the pre-training process, the model predicts the next word or token in a sequence based on the preceding words or tokens. This sequential nature allows them to capture contextual information, thereby producing coherent and contextually relevant text. Notable autoregressive language models include the GPT series [35 ###reference_b35###, 71 ###reference_b71###, 72 ###reference_b72###, 73 ###reference_b73###].\nRecently, ChatGPT, building upon the foundation of GPT-3.5, has emerged as a noteworthy advancement in the field of pre-trained models. Its success stems from the incorporation of reinforcement learning utilizing human feedback, a methodology that iteratively refines the model\u2019s alignment with user intent. 
Recently, ChatGPT, building upon the foundation of GPT-3.5, has emerged as a noteworthy advancement among pre-trained models. Its success stems from the incorporation of reinforcement learning from human feedback, a methodology that iteratively refines the model\u2019s alignment with user intent. By integrating capabilities from GPT-4, a large multimodal model capable of processing both image and text inputs, ChatGPT has evolved into a versatile problem-solving tool, proficient in producing text-based outputs for a wide array of tasks. Its triumph, like that of its predecessors, can be attributed primarily to pre-training extremely large models on a vast and diverse corpus spanning many forms and tasks; this comprehensive training empowers it to adeptly comprehend and generate language [93, 94]." | |
| }, | |
| { | |
| "section_id": "4.2.2", | |
| "parent_section_id": "4.2", | |
| "section_name": "4.2.2 Pre-training in CV", | |
| "text": "In the CV area, pre-training has emerged as a strategy to address challenges posed by limited labeled data and complex visual tasks, capturing low-level visual features, such as edges, textures, and colors, from a vast amount of source data. Through learning these visual representations, pre-trained models can discern essential visual cues and patterns. Subsequently, these pre-trained models serve as a foundational starting point for more specific CV tasks, including image classification and object recognition tasks.\nPre-training in CV has proven particularly valuable in scenarios where domain-specific data is either scarce or expensive to obtain. Models pre-trained on generic datasets, such as ImageNet, have exhibited consistent improvements when adapting to various domain-specific CV tasks [91 ###reference_b91###, 92 ###reference_b92###, 95 ###reference_b95###]. For instance, in medical imaging, acquiring labeled data often requires expert annotations and incurs significant costs. Utilizing models pre-trained on general datasets substantially boosts model performance without requiring extensive labeled medical data [95 ###reference_b95###].\nAnother advantage of pre-trained models in CV is their ability to expedite the training process. Initializing a model with parameters from pre-trained models, instead of random initialization, can promote faster convergence and better local minima during the optimization process. This is particularly beneficial when working with large-scale image datasets, where training a deep network from scratch might be computationally prohibitive." | |
| }, | |
| { | |
| "section_id": "4.2.3", | |
| "parent_section_id": "4.2", | |
| "section_name": "4.2.3 Fine tuning", | |
| "text": "Upon completing the pre-training phase, models enter the fine-tuned process, adapting to specific downstream tasks. The fine-tuning process enables the pre-trained models to adapt their learned representations to target domains, thereby enhancing their performance on particular tasks. Various strategies have emerged to navigate the fine-tuning process, including using smaller learning rates, applying reduced learning rates to initial layers, strategically freezing and then gradually unfreezing layers, or exclusively reinitializing the final layer. In scenarios where a pronounced disparity exists between the source pre-training tasks and the target application, extensive fine-tuning of the entire network may become requisite.\nThese fine-tuning methodologies can be classified based on criteria such as which layers are modified and the amount of task-specific data leveraged. Subsequent sections will discuss two key categories in these fine-tuning techniques:\nFine-tuning methods, when distinguished based on the layers subjected to modification, fall into two categories: full and partial fine-tuning. Full fine-tuning necessitates that every layer of the pre-trained models be further trained using task-specific data. This comprehensive adjustment enables the model to tailor its parameters to the specificities of the target domain. [96 ###reference_b96### ###reference_b96###] shows that, for the localization task in the ImageNet Large Scale Visual Recognition Challenge [97 ###reference_b97### ###reference_b97###], fine-tuning all layers outperforms tuning only the fully connected layers. However, as indicated in [74 ###reference_b74### ###reference_b74###], direct knowledge transfer from source data might not always be optimal due to potential biases or even negative influences on the target class in certain scenarios. In such instances, partial fine-tuning methods could provide a viable alternative. In partial fine-tuning methods, only a subset of layers within the pre-trained models is modified while the rest remain frozen, preserving their pre-trained knowledge and ensuring the retention of general features and representations.\nPartial fine-tuning proves particularly valuable when dealing with smaller task-specific datasets, mitigating overfitting risks and leveraging pre-existing knowledge. Notably, while the common approach leans toward fine-tuning the final layers, studies [75 ###reference_b75### ###reference_b75###] have underscored the occasional benefit of tuning initial or middle layers for certain tasks. Despite the considerable advantages of utilizing pre-trained models, their local fine-tuning can be computationally intensive and challenging. To address this issue, Offsite-Tuning [98 ###reference_b98### ###reference_b98###] has been proposed, offering a privacy-preserving and efficient transfer learning framework. In this approach, the first and final layers of the pre-trained model function as an adapter, with the remaining layers compressed into an entity referred to as an emulator. This structure enables the fine-tuning of the adapter using the target data, guided by the static emulator. Subsequently, the fine-tuned adapter is plugged into the original full pre-trained model, enhancing its performance on specified tasks. Besides computational challenges, fine-tuning can reduce robustness to distribution shifts. 
Despite the considerable advantages of utilizing pre-trained models, fine-tuning them locally can be computationally intensive and challenging. To address this issue, Offsite-Tuning [98] has been proposed, offering a privacy-preserving and efficient transfer learning framework. In this approach, the first and final layers of the pre-trained model act as an adapter, while the remaining layers are compressed into an entity referred to as an emulator. This structure enables fine-tuning of the adapter on the target data, guided by the static emulator; the fine-tuned adapter is then plugged back into the original full pre-trained model, enhancing its performance on the specified task. Besides computational challenges, fine-tuning can reduce robustness to distribution shifts. Robust fine-tuning might be achieved by linearly interpolating between the weights of the original zero-shot and fine-tuned models [99], and averaging the weights of multiple models fine-tuned with different hyperparameter configurations has been shown to improve accuracy without increasing inference time [100]. A sketch of this weight-space interpolation follows below." | |
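To illustrate the interpolation idea described above, here is a minimal sketch (PyTorch-style Python; the models and names are illustrative stand-ins) that linearly interpolates between a zero-shot checkpoint and a fine-tuned checkpoint in weight space:

```python
import torch

def interpolate_weights(zero_shot_state, finetuned_state, alpha=0.5):
    """Return state dict (1 - alpha) * zero-shot + alpha * fine-tuned."""
    return {k: (1 - alpha) * zero_shot_state[k] + alpha * finetuned_state[k]
            for k in zero_shot_state}

zero_shot = torch.nn.Linear(16, 4)      # stands in for the zero-shot model
finetuned = torch.nn.Linear(16, 4)      # stands in for its fine-tuned copy
merged = torch.nn.Linear(16, 4)
merged.load_state_dict(
    interpolate_weights(zero_shot.state_dict(), finetuned.state_dict(), alpha=0.5))
# Sweeping alpha in [0, 1] trades off target-task accuracy (alpha -> 1)
# against robustness to distribution shift (alpha -> 0).
```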
| }, | |
| { | |
| "section_id": "4.2.4", | |
| "parent_section_id": "4.2", | |
| "section_name": "4.2.4 Handling Heterogeneity of Feature Spaces", | |
| "text": "Adapting pre-trained models to specialized target datasets introduces challenges, particularly in reconciling heterogeneity in input dimensions between the pre-trained model and the target data.\nIn the field of NLP, early research utilized feature transfer approaches in pre-training methods, focusing on integrating learned feature representations, such as word embeddings, into target tasks. These endeavors aimed to capture semantic information from extensive source datasets and transfer knowledge to target domains with limited resources. But it is important to highlight that word embeddings may display heterogeneity across diverse datasets, due to various factors such as data sources, languages, or contexts.\nWith the advent of transformer-based models in 2017, pre-training in NLP has shifted its focus toward parameter transfer methods. Unlike their predecessors, parameter transfer methods assume that the source and target domains share common model structures, parameters, or prior distributions of hyperparameters. Instead of transferring features produced by previous encoders as in feature transfer, the parameter transfer directly shares the model structure and parameters of the pre-trained models.\nBy implicitly encoding semantic information into the model parameters, these models eliminate the need for the separate word embedding step inherent in previous feature transfer approaches. Instead, the input is represented as a collection of words or tokens in the language, addressing the heterogeneity in feature spaces across different domains. This innovative approach ensures that representations in varied domains are inherently homogeneous, thereby effectively handling the discrepancies in feature spaces without the necessity for explicit preprocessing.\nIn the field of CV, addressing heterogeneity in feature spaces during pre-training can be challenging, especially when interfacing with datasets with varying image sizes, resolutions, or modalities. Simple data preprocessing often include actions such as resizing or cropping images to a fixed size, converting images to a standard color space, or normalizing pixel values [101 ###reference_b101###, 102 ###reference_b102###, 103 ###reference_b103###, 104 ###reference_b104###]. An alternative technique is feature extraction, which transforms images using a feature extractor to align with the input size of the pre-trained model.\nFor example, ProteinChat [105 ###reference_b105###] uses a projection layer as a feature extractor, enabling a smooth and effective connection between the protein images and the subsequent pre-trained large language model.\nAnother example is the Vision Transformer (ViT) [106 ###reference_b106###], which was inspired by the natural capability of using \u201ctokens\u201d to handle heterogeneity in NLP. ViT treats images as sequences of flattened patches, where each patch is linearly embedded and then processed by the transformer architecture. The transformer can efficiently capture long-range dependencies across patches using self-attention mechanisms. ViT also incorporates positional embeddings to preserve the spatial context, which gets lost amidst the patch-based transformation. Upon being pre-trained on large, diverse datasets, ViT can extract meaningful and universal features, thereby demonstrating adeptness at dealing with heterogeneity. 
Another interesting example is Visual-Linguistic BERT [107], which develops a unified transformer-based architecture to craft pre-trainable generic representations suited for visual-linguistic tasks. The model accepts both visual and linguistic embedded features as input: each input element is either a word from the input sentence or a region-of-interest from the input image. While the content features are domain-specific, the representation generated through multiple multi-modal transformer attention modules is proficient in aggregating and aligning visual-linguistic information." | |
| }, | |
| { | |
| "section_id": "5", | |
| "parent_section_id": null, | |
| "section_name": "Application Scenarios", | |
| "text": "###table_2### In this section, we will delve into the utilization of HTL methods in specific areas, including NLP, CV, Multimodality, and Biomedicine, as outlined in Table 2 ###reference_### and illustrated in Figure 7 ###reference_###. Through a detailed examination of methods in each of these domains, we aim to uncover the challenges and progress across diverse application contexts. Additionally, we highlight prominent datasets for HTL research, providing comprehensive details and referencing the specific methods that employed them, as detailed in Table 3 ###reference_###.\n\n###figure_7###" | |
| }, | |
| { | |
| "section_id": "5.1", | |
| "parent_section_id": "5", | |
| "section_name": "Natural Language Processing", | |
| "text": "Transfer learning has emerged as a valuable approach in NLP to address the challenge of scarce labeled data in specific scenarios [25 ###reference_b25###]. In the context of object classification tasks, several methods [7 ###reference_b7###, 110 ###reference_b110###, 111 ###reference_b111###] leverage information from various domains and apply it to target domains for classifying documents in 20 Newsgroups text collection dataset.\nFor sentiment analysis tasks, Multi-Domain Sentiment Dataset [130 ###reference_b130###] contains Amazon product reviews for four different product categories: books, DVDs, electronics, and kitchen appliances. By selecting one of these domains as the target domain, HTL methods [110 ###reference_b110###, 62 ###reference_b62###, 64 ###reference_b64###, 66 ###reference_b66###, 49 ###reference_b49###] can effectively transfer insights and expertise from the remaining categories, enhancing model robustness and accuracy in domain-specific sentiment analysis.\nObtaining labeled data can be particularly challenging in low-resource languages. Transfer learning has emerged as a valuable strategy to mitigate this challenge by facilitating knowledge transfer from well-resourced languages, such as English, to low-resource languages. For example, various methods [112 ###reference_b112###, 113 ###reference_b113###, 114 ###reference_b114###, 115 ###reference_b115###, 56 ###reference_b56###, 54 ###reference_b54###, 64 ###reference_b64###, 66 ###reference_b66###, 67 ###reference_b67###, 45 ###reference_b45###, 6 ###reference_b6###, 46 ###reference_b46###, 49 ###reference_b49###, 53 ###reference_b53###, 50 ###reference_b50###, 109 ###reference_b109###, 110 ###reference_b110###, 111 ###reference_b111###] have been developed to enable this information transfer across languages. These methods utilize multilingual datasets like the Multilingual Reuters Collection Dataset [116 ###reference_b116###] and the Multilingual Amazon Reviews Corpus [131 ###reference_b131###], covering languages including English, French, German, Italian, Spanish, Japanese, and Chinese. By employing these datasets, models are able to capture universal contextual dependencies and linguistic patterns that are shared across languages, thereby enhancing performance in NLP tasks across diverse linguistic settings." | |
| }, | |
| { | |
| "section_id": "5.2", | |
| "parent_section_id": "5", | |
| "section_name": "Computer Vision", | |
| "text": "Transfer learning is widely applied in CV for several reasons. Firstly, it facilitates the transfer of knowledge from pre-trained models on large-scale datasets, such as ImageNet, to new tasks or domains with limited labeled data. This process not only saves time but also conserves computational resources. Secondly, transfer learning leverages shared visual features among different CV tasks, enabling faster model development and improved performance. Lastly, it addresses the challenge of domain shift by adapting models to variations in lighting, viewpoint, or image quality, thereby enhancing their robustness and generalization across different visual environments. Overall, transfer learning accelerates training, improves performance, and enhances the applicability of CV in various domains, including image classification, object recognition, image segmentation, person re-identification,.\nOne of the widely recognized tasks in HTL within the field of CV is cross-domain object recognition. For this purpose, the commonly employed dataset is an amalgamation of the Office and Caltech-256 datasets. The Office dataset [47 ###reference_b47###] includes images sourced from three distinct origins: images obtained from Amazon, high-resolution images captured with a digital SLR camera, and lower-resolution images taken using a web camera [117 ###reference_b117###, 118 ###reference_b118###, 113 ###reference_b113###, 115 ###reference_b115###]. By integrating images from the Caltech-256 dataset, which forms the fourth category, the resultant Office + Caltech-256 dataset is compiled by selecting categories that overlap between both datasets [119 ###reference_b119###, 87 ###reference_b87###, 51 ###reference_b51###, 6 ###reference_b6###, 56 ###reference_b56###, 54 ###reference_b54###, 63 ###reference_b63###, 67 ###reference_b67###, 68 ###reference_b68###, 52 ###reference_b52###, 53 ###reference_b53###].\nIn the broader field of CV, diverse datasets are utilized for specialized tasks. For example, the CIFAR-10 and CIFAR-100 datasets are essential in image classification tasks and are invaluable for assessing knowledge transfer across varied categories [120 ###reference_b120###]. The UCI dataset [132 ###reference_b132###], particularly noted for tasks centered around handwritten digit recognition [121 ###reference_b121###], has proven to be a reliable resource. Furthermore, a notable study [122 ###reference_b122###] examines the selection of 3D objects from renowned datasets such as NTU [133 ###reference_b133###] and ModelNet40 [134 ###reference_b134###], exploring knowledge transfer in this context. In the area of heterogeneous face recognition, datasets such as CASIA [135 ###reference_b135###], NIVL [136 ###reference_b136###], and the CMU Multi-Pie dataset [137 ###reference_b137###] are frequently employed [123 ###reference_b123###, 124 ###reference_b124###, 118 ###reference_b118###]. These datasets collectively contribute to the exploration of knowledge transfer and transfer learning in CV applications." | |
| }, | |
| { | |
| "section_id": "5.3", | |
| "parent_section_id": "5", | |
| "section_name": "Multimodality", | |
| "text": "When learning with multimodal data, aligning feature spaces effectively presents significant challenges. In these scenarios, HTL becomes invaluable. Its strength lies in its ability to harness auxiliary data as intermediaries, facilitating a smooth information flow between modalities and effectively bridging the gap between source and target domains.\nMultimodal tasks often involve both images and text. For instance, consider the context of image classification as the target learning task, where a collection of text documents serves as auxiliary source data. In the research conducted in [45 ###reference_b45###, 108 ###reference_b108###], co-occurrence data, such as text-image pairs, serve as this intermediate data to establish a connection between the source and target domains. This type of data is often readily available and easily collected from social networks, providing a cost-effective solution for knowledge transfer. The representations of images can be enriched by incorporating high-level features and semantic concepts extracted from auxiliary images and text data [43 ###reference_b43###].\nAdditionally, the NUS-WIDE dataset [138 ###reference_b138###] finds common applications in text-to-image classification tasks. This extensive dataset comprises 45 tasks, each composed of 1200 text documents, 600 images, and 1600 co-occurred text-image pairs [139 ###reference_b139###]. This dataset can be extended by incorporating images from the ImageNet dataset as in [60 ###reference_b60###] or text-image pairs extracted from \u201cWikipedia Feature Articles\u201d [140 ###reference_b140###] as demonstrated in studies like [87 ###reference_b87###, 115 ###reference_b115###, 54 ###reference_b54###, 56 ###reference_b56###, 68 ###reference_b68###, 45 ###reference_b45###, 52 ###reference_b52###, 53 ###reference_b53###, 55 ###reference_b55###, 109 ###reference_b109###]." | |
| }, | |
| { | |
| "section_id": "5.4", | |
| "parent_section_id": "5", | |
| "section_name": "Biomedicine", | |
| "text": "Heterogeneity commonly exists in biomedicine: (a) Medical terminology undergoes continuous evolution, leading to the retirement of outdated terms and the introduction of novel ones. On occasions, these changes can be substantial, as exemplified by the transition from ICD-9 to ICD-10 coding systems; (b) The extensive adoption of electronic health record systems (EHRs) opens up substantial opportunities for deriving insights from routinely accumulated EHR data. However, the existence of distinct EHR structural templates and the utilization of local abbreviations for laboratory tests across various healthcare systems result in considerable heterogeneity among the collected data elements; (c) The potential of leveraging large language models and visual models in biomedicine may encounter challenges in effectively integrating and adapting to new data components, including medical terms, biomedical concepts (such as protein structures), and medical images.\nAddressing this heterogeneity is crucial, and HTL strategies have evolved over time. Previous HTL approaches include basic data augmentation, incorporation of prior knowledge into the source Bayesian network [125 ###reference_b125###], and a matrix projection method that only requires each source domain to share the empirical covariance matrix of the features [126 ###reference_b126###]. Recent explorations have begun to augment large fundamental models with biomedical data types, such as protein 3D structures [105 ###reference_b105###], drug compound graphs [127 ###reference_b127###], chest X-ray images [128 ###reference_b128###]. These data types are typically processed using encoding and projection layers to convert them into compatible formats for large foundational models. The training procedures often employ a partial fine-tuning strategy." | |
| }, | |
| { | |
| "section_id": "6", | |
| "parent_section_id": null, | |
| "section_name": "Discussion and Future Directions", | |
| "text": "HTL has emerged as a transformative approach in the realm of machine learning, addressing the complexities associated with divergent feature spaces, data distributions, and label spaces between source and target domains. This work aimed to offer a comprehensive examination of HTL in light of the recent advancements, particularly those made post-2017. As evidenced by the survey, HTL methodologies have shown significant promise, especially in fields such as NLP, CV, Multimodality, and Biomedicine. It offers a robust mechanism to tackle the challenges faced in data-intensive fields across domains. The surveyed methods and techniques underscore their adaptability and versatility across a range of scenarios. After a thorough review of the existing techniques in HTL, we would like to highlight some key insights, opportunities, and challenges in the domain of HTL." | |
| }, | |
| { | |
| "section_id": "7", | |
| "parent_section_id": null, | |
| "section_name": "Conclusion", | |
| "text": "Heterogeneous transfer learning (HTL) has become an essential tool in the modern landscape of machine learning, addressing the persistent challenge of data scarcity in real-world scenarios where source and target domains differ in feature or label spaces.\nThis survey offers a comprehensive examination over 60 methods, categorizing them into data-based and model-based approaches.\nBy systematically reviewing a wide range of recent methods, including instance-based, feature representation-based, parameter regularization, and parameter tuning techniques, we highlight the diversity of methodologies and their applications across various domains. Our comprehensive analysis of the underlying assumptions, calculations, and algorithms, along with a discussion of current limitations, offers valuable guidance for future research. This ensures that emerging HTL methods can address the identified gaps and advance the field. Moreover, by incorporating recent advancements like transformer-based models and multi-modal learning, we ensure that our survey reflects the latest developments and trends. This work not only bridges significant gaps in the literature but also serves as a crucial resource for researchers aiming to develop more robust and effective HTL techniques. The extensive coverage and critical insights offered by this survey are poised to stimulate further research and innovation in HTL, paving the way for its broader application and more significant impact in various real-world scenarios." | |
| } | |
| ], | |
| "appendix": [], | |
| "tables": { | |
| "1": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>The summary of important references for different types of methods.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S2.T1.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S2.T1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.2.1\">Important References</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.2.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S2.T1.1.2.1.1\">Data-based</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S2.T1.1.2.2\">Instance-based</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.2.3\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib42\" title=\"\">42</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib43\" title=\"\">43</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib44\" title=\"\">44</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib45\" title=\"\">45</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib46\" title=\"\">46</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib6\" title=\"\">6</a>]</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.3.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S2.T1.1.3.1.1\">Feature-based</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.3.2\">Feature mapping</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.3.3\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib47\" title=\"\">47</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib48\" title=\"\">48</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib7\" title=\"\">7</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib49\" title=\"\">49</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib8\" title=\"\">8</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib9\" title=\"\">9</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib50\" title=\"\">50</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib51\" title=\"\">51</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib52\" title=\"\">52</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib53\" title=\"\">53</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib54\" title=\"\">54</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib55\" title=\"\">55</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib56\" title=\"\">56</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib57\" title=\"\">57</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib58\" title=\"\">58</a>, <a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2310.08459v3#bib.bib59\" title=\"\">59</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib60\" title=\"\">60</a>]</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.4.1\">Feature augmentation</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.4.2\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib61\" title=\"\">61</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib62\" title=\"\">62</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib63\" title=\"\">63</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib64\" title=\"\">64</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib65\" title=\"\">65</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib66\" title=\"\">66</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib67\" title=\"\">67</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib68\" title=\"\">68</a>]</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.1.5.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S2.T1.1.5.1.1\">Model-based</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S2.T1.1.5.2\">Parameter Regularization</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.5.3\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib69\" title=\"\">69</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib70\" title=\"\">70</a>]</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" colspan=\"2\" id=\"S2.T1.1.6.1\">Parameter Tuning</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.1.6.2\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib34\" title=\"\">34</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib35\" title=\"\">35</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib71\" title=\"\">71</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib72\" title=\"\">72</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib73\" title=\"\">73</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib74\" title=\"\">74</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib75\" title=\"\">75</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib76\" title=\"\">76</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib77\" title=\"\">77</a>]</cite></td>\n</tr>\n</table>\n</figure>", | |
| "capture": "Table 1: The summary of important references for different types of methods." | |
| }, | |
| "2": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>The summary of application scenarios. </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T2.1\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.1.1\">Application</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.2.1\">Reference</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.1\">NLP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.2\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib108\" title=\"\">108</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib56\" title=\"\">56</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib54\" title=\"\">54</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib64\" title=\"\">64</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib66\" title=\"\">66</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib67\" title=\"\">67</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib45\" title=\"\">45</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib6\" title=\"\">6</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib46\" title=\"\">46</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib49\" title=\"\">49</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib53\" title=\"\">53</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib50\" title=\"\">50</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib109\" title=\"\">109</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib110\" title=\"\">110</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib111\" title=\"\">111</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib62\" title=\"\">62</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib25\" title=\"\">25</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib112\" title=\"\">112</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib113\" title=\"\">113</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib114\" title=\"\">114</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib115\" title=\"\">115</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib116\" title=\"\">116</a>]</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.1\">CV</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib51\" title=\"\">51</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib6\" title=\"\">6</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib56\" title=\"\">56</a>, <a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2310.08459v3#bib.bib54\" title=\"\">54</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib63\" title=\"\">63</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib67\" title=\"\">67</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib68\" title=\"\">68</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib52\" title=\"\">52</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib53\" title=\"\">53</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib109\" title=\"\">109</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib65\" title=\"\">65</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib44\" title=\"\">44</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib117\" title=\"\">117</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib118\" title=\"\">118</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib113\" title=\"\">113</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib115\" title=\"\">115</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib119\" title=\"\">119</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib87\" title=\"\">87</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib120\" title=\"\">120</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib121\" title=\"\">121</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib122\" title=\"\">122</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib123\" title=\"\">123</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib124\" title=\"\">124</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib118\" title=\"\">118</a>]</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.1\">Biomedicine</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.2\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib125\" title=\"\">125</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib126\" title=\"\">126</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib105\" title=\"\">105</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib127\" title=\"\">127</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib128\" title=\"\">128</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib9\" title=\"\">9</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib50\" title=\"\">50</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib129\" title=\"\">129</a>]</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.5.1\">Multimodality</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.5.2\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib42\" title=\"\">42</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib46\" title=\"\">46</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib69\" 
title=\"\">69</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib54\" title=\"\">54</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib56\" title=\"\">56</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib68\" title=\"\">68</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib45\" title=\"\">45</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib52\" title=\"\">52</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib53\" title=\"\">53</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib55\" title=\"\">55</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib109\" title=\"\">109</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib44\" title=\"\">44</a>]</cite></td>\n</tr>\n</table>\n</figure>", | |
| "capture": "Table 2: The summary of application scenarios. " | |
| }, | |
| "3": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figure class=\"ltx_figure ltx_minipage ltx_align_middle\" id=\"S5.T3.fig1\" style=\"width:433.6pt;\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_figure\">Table 3: </span>The summary of benchmark datasets.</figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T3.fig1.1\">\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_tt\" id=\"S5.T3.fig1.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.1.1.1.1\" style=\"width:128.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.fig1.1.1.1.1.1.1\">Dataset</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.fig1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.fig1.1.1.2.1\">Year</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_tt\" id=\"S5.T3.fig1.1.1.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.1.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.1.3.1.1\" style=\"width:113.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.fig1.1.1.3.1.1.1\">Task</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_tt\" id=\"S5.T3.fig1.1.1.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.1.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.1.4.1.1\" style=\"width:156.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.fig1.1.1.4.1.1.1\">Method</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.2.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.2.1.1.1\" style=\"width:128.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.2.1.1.1.1\">20 Newsgroups <span class=\"ltx_note ltx_role_footnote\" id=\"footnote2\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_tag ltx_tag_note\">2</span><a class=\"ltx_ref ltx_url ltx_font_typewriter\" href=\"http://qwone.com/~jason/20Newsgroups/\" title=\"\">http://qwone.com/~jason/20Newsgroups/</a></span></span></span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.fig1.1.2.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.2.2.1\">1995</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.2.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.2.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.2.3.1.1\" style=\"width:113.8pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.2.3.1.1.1\">\n<span class=\"ltx_inline-block\" id=\"S5.T3.fig1.1.2.3.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.2.3.1.1.1.1.1\">Text Classification,</span>\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.2.3.1.1.1.1.2\">Topic Modeling</span>\n</span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.2.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.2.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.2.4.1.1\" style=\"width:156.5pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.2.4.1.1.1\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2310.08459v3#bib.bib7\" title=\"\">7</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib110\" title=\"\">110</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib111\" title=\"\">111</a>]</cite></span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.3.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.3.1.1.1\" style=\"width:128.0pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.3.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.3.2.1.1\" style=\"width:113.8pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.3.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.3.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.3.3.1.1\" style=\"width:156.5pt;\"></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.4.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.4.1.1.1\" style=\"width:128.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.4.1.1.1.1\">Multi-Domain Sentiment <span class=\"ltx_note ltx_role_footnote\" id=\"footnote3\"><sup class=\"ltx_note_mark\">3</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">3</sup><span class=\"ltx_tag ltx_tag_note\">3</span><a class=\"ltx_ref ltx_url ltx_font_typewriter\" href=\"https://www.cs.jhu.edu/~mdredze/datasets/sentiment/\" title=\"\">https://www.cs.jhu.edu/~mdredze/datasets/sentiment/</a></span></span></span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.fig1.1.4.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.4.2.1\">2007</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.4.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.4.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.4.3.1.1\" style=\"width:113.8pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.4.3.1.1.1\">\n<span class=\"ltx_inline-block\" id=\"S5.T3.fig1.1.4.3.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.4.3.1.1.1.1.1\">Sentiment Analysis,</span>\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.4.3.1.1.1.1.2\">Text Classification</span>\n</span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.4.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.4.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.4.4.1.1\" style=\"width:156.5pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.4.4.1.1.1\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib110\" title=\"\">110</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib62\" title=\"\">62</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib64\" title=\"\">64</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib66\" title=\"\">66</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib49\" 
title=\"\">49</a>]</cite></span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.5.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.5.1.1.1\" style=\"width:128.0pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.5.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.5.2.1.1\" style=\"width:113.8pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.5.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.5.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.5.3.1.1\" style=\"width:156.5pt;\"></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.6.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.6.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.6.1.1.1\" style=\"width:128.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.6.1.1.1.1\">Cross-Lingual Sentiment <span class=\"ltx_note ltx_role_footnote\" id=\"footnote4\"><sup class=\"ltx_note_mark\">4</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">4</sup><span class=\"ltx_tag ltx_tag_note\">4</span><a class=\"ltx_ref ltx_url ltx_font_typewriter\" href=\"https://zenodo.org/record/3251672\" title=\"\">https://zenodo.org/record/3251672</a></span></span></span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.fig1.1.6.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.6.2.1\">2010</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.6.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.6.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.6.3.1.1\" style=\"width:113.8pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.6.3.1.1.1\">\n<span class=\"ltx_inline-block\" id=\"S5.T3.fig1.1.6.3.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.6.3.1.1.1.1.1\">Cross-Lingual</span>\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.6.3.1.1.1.1.2\">Sentiment Analysis</span>\n</span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.6.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.6.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.6.4.1.1\" style=\"width:156.5pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.6.4.1.1.1\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib62\" title=\"\">62</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib64\" title=\"\">64</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib66\" title=\"\">66</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib49\" title=\"\">49</a>]</cite></span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.7\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.7.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.7.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.7.1.1.1\" style=\"width:128.0pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.7.2\">\n<span 
class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.7.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.7.2.1.1\" style=\"width:113.8pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.7.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.7.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.7.3.1.1\" style=\"width:156.5pt;\"></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.8\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.8.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.8.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.8.1.1.1\" style=\"width:128.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.8.1.1.1.1\">Office <span class=\"ltx_note ltx_role_footnote\" id=\"footnote5\"><sup class=\"ltx_note_mark\">5</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">5</sup><span class=\"ltx_tag ltx_tag_note\">5</span><a class=\"ltx_ref ltx_url ltx_font_typewriter\" href=\"https://faculty.cc.gatech.edu/~judy/domainadapt/\" title=\"\">https://faculty.cc.gatech.edu/~judy/domainadapt/</a></span></span></span> + Caltech <span class=\"ltx_note ltx_role_footnote\" id=\"footnote6\"><sup class=\"ltx_note_mark\">6</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">6</sup><span class=\"ltx_tag ltx_tag_note\">6</span><a class=\"ltx_ref ltx_url ltx_font_typewriter\" href=\"https://www.vision.caltech.edu/datasets/\" title=\"\">https://www.vision.caltech.edu/datasets/</a></span></span></span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.fig1.1.8.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.8.2.1\">2010</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.8.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.8.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.8.3.1.1\" style=\"width:113.8pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.8.3.1.1.1\">\n<span class=\"ltx_inline-block\" id=\"S5.T3.fig1.1.8.3.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.8.3.1.1.1.1.1\">Object Recognition,</span>\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.8.3.1.1.1.1.2\">Image Classification</span>\n</span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.8.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.8.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.8.4.1.1\" style=\"width:156.5pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.8.4.1.1.1\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib119\" title=\"\">119</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib51\" title=\"\">51</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib6\" title=\"\">6</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib56\" title=\"\">56</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib54\" title=\"\">54</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib63\" title=\"\">63</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib67\" title=\"\">67</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib68\" title=\"\">68</a>, <a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2310.08459v3#bib.bib52\" title=\"\">52</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib53\" title=\"\">53</a>]</cite></span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.9\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.9.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.9.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.9.1.1.1\" style=\"width:128.0pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.9.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.9.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.9.2.1.1\" style=\"width:113.8pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.9.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.9.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.9.3.1.1\" style=\"width:156.5pt;\"></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.10\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.10.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.10.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.10.1.1.1\" style=\"width:128.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.10.1.1.1.1\">Multilingual Reuters Collection <span class=\"ltx_note ltx_role_footnote\" id=\"footnote7\"><sup class=\"ltx_note_mark\">7</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">7</sup><span class=\"ltx_tag ltx_tag_note\">7</span><a class=\"ltx_ref ltx_url ltx_font_typewriter\" href=\"https://archive.ics.uci.edu/dataset/259/reuters+rcv1+rcv2+multilingual+multiview+text+categorization+test+collection\" title=\"\">https://archive.ics.uci.edu/dataset/259/reuters+rcv1+rcv2+multilingual+multiview+text+categorization+test+collection</a></span></span></span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.fig1.1.10.2\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.10.2.1\">2013</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.10.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.10.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.10.3.1.1\" style=\"width:113.8pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.10.3.1.1.1\">\n<span class=\"ltx_inline-block\" id=\"S5.T3.fig1.1.10.3.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.10.3.1.1.1.1.1\">Multilingual Classification,</span>\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.10.3.1.1.1.1.2\">Sentiment Analysis</span>\n</span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.10.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.10.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.10.4.1.1\" style=\"width:156.5pt;\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib55\" title=\"\">55</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib113\" title=\"\">113</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib114\" title=\"\">114</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib115\" title=\"\">115</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib112\" title=\"\">112</a>, <a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2310.08459v3#bib.bib56\" title=\"\">56</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib54\" title=\"\">54</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib64\" title=\"\">64</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib66\" title=\"\">66</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib67\" title=\"\">67</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib45\" title=\"\">45</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib6\" title=\"\">6</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib46\" title=\"\">46</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib49\" title=\"\">49</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib53\" title=\"\">53</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib50\" title=\"\">50</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib109\" title=\"\">109</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib110\" title=\"\">110</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib111\" title=\"\">111</a>]</cite></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.11\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.11.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.11.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.11.1.1.1\" style=\"width:128.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.11.1.1.1.1\">NUS-WIDE <span class=\"ltx_note ltx_role_footnote\" id=\"footnote8\"><sup class=\"ltx_note_mark\">8</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">8</sup><span class=\"ltx_tag ltx_tag_note\">8</span><a class=\"ltx_ref ltx_url ltx_font_typewriter\" href=\"https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html\" title=\"\">https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html</a></span></span></span> + ImageNet <span class=\"ltx_note ltx_role_footnote\" id=\"footnote9\"><sup class=\"ltx_note_mark\">9</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">9</sup><span class=\"ltx_tag ltx_tag_note\">9</span><a class=\"ltx_ref ltx_url ltx_font_typewriter\" href=\"https://www.image-net.org/\" title=\"\">https://www.image-net.org/</a></span></span></span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.fig1.1.11.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.11.2.1\">2015</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.11.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.11.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.11.3.1.1\" style=\"width:113.8pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.11.3.1.1.1\">\n<span class=\"ltx_inline-block\" id=\"S5.T3.fig1.1.11.3.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.11.3.1.1.1.1.1\">Image Classification</span>\n</span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.11.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.11.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.11.4.1.1\" style=\"width:156.5pt;\"><span 
class=\"ltx_text\" id=\"S5.T3.fig1.1.11.4.1.1.1\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib54\" title=\"\">54</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib56\" title=\"\">56</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib68\" title=\"\">68</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib45\" title=\"\">45</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib52\" title=\"\">52</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib53\" title=\"\">53</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib55\" title=\"\">55</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib109\" title=\"\">109</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib44\" title=\"\">44</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib60\" title=\"\">60</a>]</cite></span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.12\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.12.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.12.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.12.1.1.1\" style=\"width:128.0pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.12.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.12.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.12.2.1.1\" style=\"width:113.8pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.12.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.12.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.12.3.1.1\" style=\"width:156.5pt;\"></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.13\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.13.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.13.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.13.1.1.1\" style=\"width:128.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.13.1.1.1.1\">Office-Home <span class=\"ltx_note ltx_role_footnote\" id=\"footnote10\"><sup class=\"ltx_note_mark\">10</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">10</sup><span class=\"ltx_tag ltx_tag_note\">10</span><a class=\"ltx_ref ltx_url ltx_font_typewriter\" href=\"https://www.hemanthdv.org/officeHomeDataset.html\" title=\"\">https://www.hemanthdv.org/officeHomeDataset.html</a></span></span></span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.fig1.1.13.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.13.2.1\">2017</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.13.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.13.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.13.3.1.1\" style=\"width:113.8pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.13.3.1.1.1\">\n<span class=\"ltx_inline-block\" id=\"S5.T3.fig1.1.13.3.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.13.3.1.1.1.1.1\">Object Recognition,</span>\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.13.3.1.1.1.1.2\">Image Classification</span>\n</span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" 
id=\"S5.T3.fig1.1.13.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.13.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.13.4.1.1\" style=\"width:156.5pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.13.4.1.1.1\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib53\" title=\"\">53</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib109\" title=\"\">109</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib51\" title=\"\">51</a>]</cite></span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.14\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.14.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.14.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.14.1.1.1\" style=\"width:128.0pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.14.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.14.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.14.2.1.1\" style=\"width:113.8pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top\" id=\"S5.T3.fig1.1.14.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.14.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.14.3.1.1\" style=\"width:156.5pt;\"></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.15\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.15.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.15.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.15.1.1.1\" style=\"width:128.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.15.1.1.1.1\">Multilingual Amazon Reviews <span class=\"ltx_note ltx_role_footnote\" id=\"footnote11\"><sup class=\"ltx_note_mark\">11</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">11</sup><span class=\"ltx_tag ltx_tag_note\">11</span><a class=\"ltx_ref ltx_url ltx_font_typewriter\" href=\"https://registry.opendata.aws/amazon-reviews-ml/\" title=\"\">https://registry.opendata.aws/amazon-reviews-ml/</a></span></span></span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.fig1.1.15.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.15.2.1\">2020</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.15.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.15.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.15.3.1.1\" style=\"width:113.8pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.15.3.1.1.1\">\n<span class=\"ltx_inline-block\" id=\"S5.T3.fig1.1.15.3.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.15.3.1.1.1.1.1\">Multilingual Sentiment Analysis,</span>\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.15.3.1.1.1.1.2\">Text Classification</span>\n</span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_t\" id=\"S5.T3.fig1.1.15.4\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.15.4.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.15.4.1.1\" style=\"width:156.5pt;\"><span class=\"ltx_text\" id=\"S5.T3.fig1.1.15.4.1.1.1\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.08459v3#bib.bib64\" title=\"\">64</a>, <a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2310.08459v3#bib.bib110\" title=\"\">110</a>]</cite></span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.fig1.1.16\">\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb\" id=\"S5.T3.fig1.1.16.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.16.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.16.1.1.1\" style=\"width:128.0pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb\" id=\"S5.T3.fig1.1.16.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.16.2.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.16.2.1.1\" style=\"width:113.8pt;\"></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_top ltx_border_bb\" id=\"S5.T3.fig1.1.16.3\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T3.fig1.1.16.3.1\">\n<span class=\"ltx_p\" id=\"S5.T3.fig1.1.16.3.1.1\" style=\"width:156.5pt;\"></span>\n</span>\n</td>\n</tr>\n</table>\n</figure>\n</figure>", | |
| "capture": "Table 3: The summary of benchmark datasets." | |
| } | |
| }, | |
| "image_paths": { | |
| "1": { | |
| "figure_path": "2310.08459v3_figure_1.png", | |
| "caption": "Figure 1: The summary of approaches in heterogeneous transfer learning.", | |
| "url": "http://arxiv.org/html/2310.08459v3/extracted/5738684/images/surveyPic_overall.jpg" | |
| }, | |
| "2": { | |
| "figure_path": "2310.08459v3_figure_2.png", | |
| "caption": "Figure 2: Instance-based method.", | |
| "url": "http://arxiv.org/html/2310.08459v3/extracted/5738684/images/instance_based_1.jpg" | |
| }, | |
| "3": { | |
| "figure_path": "2310.08459v3_figure_3.png", | |
| "caption": "Figure 3: Two feature mapping methods: symmetric (upper) and asymmetric (lower). The asymmetric method depicts mapping from target to source dimensions (shown in the figure), providing an alternative approach to projecting source to target (not depicted in the figure).", | |
| "url": "http://arxiv.org/html/2310.08459v3/extracted/5738684/images/feature_mapping_separate_0617.jpg" | |
| }, | |
| "4": { | |
| "figure_path": "2310.08459v3_figure_4.png", | |
| "caption": "Figure 4: Feature augmentation method.", | |
| "url": "http://arxiv.org/html/2310.08459v3/extracted/5738684/images/Feature_augmentation.jpg" | |
| }, | |
| "5": { | |
| "figure_path": "2310.08459v3_figure_5.png", | |
| "caption": "Figure 5: Parameter regularization method.", | |
| "url": "http://arxiv.org/html/2310.08459v3/extracted/5738684/images/parameter_regularization_20240617.jpg" | |
| }, | |
| "6": { | |
| "figure_path": "2310.08459v3_figure_6.png", | |
| "caption": "Figure 6: Parameter tuning method.", | |
| "url": "http://arxiv.org/html/2310.08459v3/extracted/5738684/images/Para_tuning.jpg" | |
| }, | |
| "7": { | |
| "figure_path": "2310.08459v3_figure_7.png", | |
| "caption": "Figure 7: Heterogeneity in application scenarios", | |
| "url": "http://arxiv.org/html/2310.08459v3/x1.png" | |
| } | |
| }, | |
| "validation": true, | |
| "references": [ | |
| { | |
| "1": { | |
| "title": "doi:10.1017/9781139061773.", | |
| "author": "Q. Yang, Y. Zhang, W. Dai, S. J. Pan, Transfer Learning, Cambridge University\nPress, 2020.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1017/9781139061773" | |
| } | |
| }, | |
| { | |
| "2": { | |
| "title": "doi:10.1109/TKDE.2009.191.", | |
| "author": "S. J. Pan, Q. Yang, A survey on transfer learning, IEEE Transactions on\nKnowledge and Data Engineering 22 (10) (2010) 1345\u20131359.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TKDE.2009.191" | |
| } | |
| }, | |
| { | |
| "3": { | |
| "title": "doi:10.1109/JPROC.2020.3004555.", | |
| "author": "F. Zhuang, Z. Qi, K. Duan, D. Xi, Y. Zhu, H. Zhu, H. Xiong, Q. He, A\ncomprehensive survey on transfer learning, Proceedings of the IEEE 109 (1)\n(2021) 43\u201376.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/JPROC.2020.3004555" | |
| } | |
| }, | |
| { | |
| "4": { | |
| "title": "doi:10.1109/TAI.2021.3054609.", | |
| "author": "S. Niu, Y. Liu, J. Wang, H. Song, A decade survey of transfer learning\n(2010\u20132020), IEEE Transactions on Artificial Intelligence 1 (2) (2020)\n151\u2013166.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TAI.2021.3054609" | |
| } | |
| }, | |
| { | |
| "5": { | |
| "title": "doi:10.1186/s40537-016-0043-6.", | |
| "author": "K. Weiss, T. M. Khoshgoftaar, D. Wang, A survey of transfer learning, Journal\nof Big data 3 (1) (2016) 1\u201340.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1186/s40537-016-0043-6" | |
| } | |
| }, | |
| { | |
| "6": { | |
| "title": "doi:10.1109/CVPR.2016.549.", | |
| "author": "Y.-H. H. Tsai, Y.-R. Yeh, Y.-C. F. Wang, Learning cross-domain landmarks for\nheterogeneous domain adaptation, in: Proceedings of the IEEE Conference on\nComputer Vision and Pattern Recognition, 2016, pp. 5081\u20135090.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/CVPR.2016.549" | |
| } | |
| }, | |
| { | |
| "7": { | |
| "title": "doi:10.1016/j.artint.2018.11.004.", | |
| "author": "S. Sukhija, N. C. Krishnan, G. Singh, Supervised heterogeneous domain\nadaptation via random forests., in: Proceedings of the Twenty-Fifth\nInternational Joint Conference on Artificial Intelligence (IJCAI-16), 2016,\npp. 2039\u20132045.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1016/j.artint.2018.11.004" | |
| } | |
| }, | |
| { | |
| "8": { | |
| "title": "doi:10.1109/ICDM.2010.65.", | |
| "author": "X. Shi, Q. Liu, W. Fan, S. Y. Philip, R. Zhu, Transfer learning on heterogenous\nfeature spaces via spectral transformation, in: 2010 IEEE International\nConference on Data Mining, IEEE, 2010, pp. 1049\u20131054.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/ICDM.2010.65" | |
| } | |
| }, | |
| { | |
| "9": { | |
| "title": "doi:10.1109/TNNLS.2022.3183326.", | |
| "author": "L. Zhang, X. Gao, Transfer adaptation learning: A decade survey, IEEE\nTransactions on Neural Networks and Learning Systems (2022).", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TNNLS.2022.3183326" | |
| } | |
| }, | |
| { | |
| "10": { | |
| "title": "doi:10.1007/978-981-15-5345-5_13.", | |
| "author": "N. Agarwal, A. Sondhi, K. Chopra, G. Singh, Transfer learning: Survey and\nclassification, in: S. Tiwari, M. C. Trivedi, K. K. Mishra, A. Misra, K. K.\nKumar, E. Suryani (Eds.), Smart Innovations in Communication and\nComputational Sciences, Springer Singapore, Singapore, 2021, pp. 145\u2013155.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1007/978-981-15-5345-5_13" | |
| } | |
| }, | |
| { | |
| "11": { | |
| "title": "doi:10.1007/978-3-030-01424-7_27.", | |
| "author": "C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, C. Liu, A survey on deep transfer\nlearning, in: Artificial Neural Networks and Machine Learning\u2013ICANN 2018:\n27th International Conference on Artificial Neural Networks, Rhodes, Greece,\nOctober 4-7, 2018, Proceedings, Part III 27, Springer, 2018, pp. 270\u2013279.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1007/978-3-030-01424-7_27" | |
| } | |
| }, | |
| { | |
| "12": { | |
| "title": "doi:10.3390/technologies11020040.", | |
| "author": "M. Iman, H. R. Arabnia, K. Rasheed, A review of deep transfer learning and\nrecent advancements, Technologies 11 (2) (2023) 40.", | |
| "venue": null, | |
| "url": "https://doi.org/10.3390/technologies11020040" | |
| } | |
| }, | |
| { | |
| "13": { | |
| "title": "doi:10.1109/ICCT46805.2019.8947072.", | |
| "author": "H. Liang, W. Fu, F. Yi, A survey of recent advances in transfer learning, in:\n2019 IEEE 19th International Conference on Communication Technology (ICCT),\nIEEE, 2019, pp. 1516\u20131523.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/ICCT46805.2019.8947072" | |
| } | |
| }, | |
| { | |
| "14": { | |
| "title": "doi:10.48550/arXiv.2010.15561.", | |
| "author": "S. Saha, T. Ahmad, Federated transfer learning: Concept and applications,\nIntelligenza Artificiale 15 (1) (2021) 35\u201344.", | |
| "venue": null, | |
| "url": "https://doi.org/10.48550/arXiv.2010.15561" | |
| } | |
| }, | |
| { | |
| "15": { | |
| "title": "doi:10.1007/978-3-031-11748-0_3.", | |
| "author": "E. Hallaji, R. Razavi-Far, M. Saif, Federated and transfer learning: A survey\non adversaries and defense mechanisms, in: Federated and Transfer Learning,\nSpringer, 2022, pp. 29\u201355.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1007/978-3-031-11748-0_3" | |
| } | |
| }, | |
| { | |
| "16": { | |
| "title": "doi:10.1109/TNNLS.2014.2330900.", | |
| "author": "L. Shao, F. Zhu, X. Li, Transfer learning for visual categorization: A survey,\nIEEE Transactions on Neural Networks and Learning Systems 26 (5) (2014)\n1019\u20131034.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TNNLS.2014.2330900" | |
| } | |
| }, | |
| { | |
| "17": { | |
| "title": "doi:10.1109/MSP.2014.2347059.", | |
| "author": "V. M. Patel, R. Gopalan, R. Li, R. Chellappa, Visual domain adaptation: A\nsurvey of recent advances, IEEE Signal Processing Magazine 32 (3) (2015)\n53\u201369.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/MSP.2014.2347059" | |
| } | |
| }, | |
| { | |
| "18": { | |
| "title": "doi:10.1016/j.neucom.2018.05.083.", | |
| "author": "M. Wang, W. Deng, Deep visual domain adaptation: A survey, Neurocomputing 312\n(2018) 135\u2013153.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1016/j.neucom.2018.05.083" | |
| } | |
| }, | |
| { | |
| "19": { | |
| "title": "doi:10.1007/s10115-013-0665-3.", | |
| "author": "D. Cook, K. D. Feuz, N. C. Krishnan, Transfer learning for activity\nrecognition: A survey, Knowledge and information systems 36 (2013) 537\u2013556.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1007/s10115-013-0665-3" | |
| } | |
| }, | |
| { | |
| "20": { | |
| "title": "doi:10.48550/arXiv.2007.04239.", | |
| "author": "Z. Alyafeai, M. S. AlShaibani, I. Ahmad, A survey on transfer learning in\nnatural language processing, arXiv preprint arXiv:2007.04239 (2020).", | |
| "venue": null, | |
| "url": "https://doi.org/10.48550/arXiv.2007.04239" | |
| } | |
| }, | |
| { | |
| "21": { | |
| "title": "doi:10.1109/ACCESS.2019.2925059.", | |
| "author": "R. Liu, Y. Shi, C. Ji, M. Jia, A survey of sentiment analysis based on transfer\nlearning, IEEE Access 7 (2019) 85401\u201385412.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/ACCESS.2019.2925059" | |
| } | |
| }, | |
| { | |
| "22": { | |
| "title": "doi:10.18653/v1/N19-5004.", | |
| "author": "S. Ruder, M. E. Peters, S. Swayamdipta, T. Wolf, Transfer learning in natural\nlanguage processing, in: Proceedings of the 2019 Conference of the North\nAmerican Chapter of the Association for Computational Linguistics:\nTutorials, 2019, pp. 15\u201318.", | |
| "venue": null, | |
| "url": "https://doi.org/10.18653/v1/N19-5004" | |
| } | |
| }, | |
| { | |
| "23": { | |
| "title": "doi:10.1016/j.neucom.2021.08.159.", | |
| "author": "X. Yu, J. Wang, Q.-Q. Hong, R. Teku, S.-H. Wang, Y.-D. Zhang, Transfer learning\nfor medical images analyses: A survey, Neurocomputing 489 (2022) 230\u2013254.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1016/j.neucom.2021.08.159" | |
| } | |
| }, | |
| { | |
| "24": { | |
| "title": "doi:10.1016/j.sysarc.2020.101830.", | |
| "author": "A. Sufian, A. Ghosh, A. S. Sadiq, F. Smarandache, A survey on deep transfer\nlearning to edge computing for mitigating the COVID-19 pandemic, Journal of\nSystems Architecture 108 (2020) 101830.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1016/j.sysarc.2020.101830" | |
| } | |
| }, | |
| { | |
| "25": { | |
| "title": "doi:10.48550/arXiv.2102.07572.", | |
| "author": "C. T. Nguyen, N. Van Huynh, N. H. Chu, Y. M. Saputra, D. T. Hoang, D. N.\nNguyen, Q.-V. Pham, D. Niyato, E. Dutkiewicz, W.-J. Hwang, Transfer learning\nfor future wireless networks: A comprehensive survey, arXiv preprint\narXiv:2102.07572 (2021).", | |
| "venue": null, | |
| "url": "https://doi.org/10.48550/arXiv.2102.07572" | |
| } | |
| }, | |
| { | |
| "26": { | |
| "title": "doi:10.3390/s22041416.", | |
| "author": "L. J. Wong, A. J. Michaels, Transfer learning for radio frequency machine\nlearning: A taxonomy and survey, Sensors 22 (4) (2022) 1416.", | |
| "venue": null, | |
| "url": "https://doi.org/10.3390/s22041416" | |
| } | |
| }, | |
| { | |
| "27": { | |
| "title": "doi:10.1186/s40537-017-0089-0.", | |
| "author": "O. Day, T. M. Khoshgoftaar, A survey on heterogeneous transfer learning,\nJournal of Big Data 4 (2017) 1\u201342.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1186/s40537-017-0089-0" | |
| } | |
| }, | |
| { | |
| "28": { | |
| "title": "doi:10.5220/0006396700170027.", | |
| "author": "M. Friedjungov\u00e1, M. Jirina, Asymmetric heterogeneous transfer learning: A\nsurvey., in: Proceedings of the 6th International Conference on Data Science,\nTechnology and Applications (DATA 2017), 2017, pp. 17\u201327.", | |
| "venue": null, | |
| "url": "https://doi.org/10.5220/0006396700170027" | |
| } | |
| }, | |
| { | |
| "29": { | |
| "title": "doi:10.1007/s11042-024-18352-3.", | |
| "author": "S. Khan, P. Yin, Y. Guo, M. Asim, A. Abd El-Latif, Heterogeneous transfer\nlearning: recent developments, applications, and challenges, Multimedia Tools\nand Applications (2024).", | |
| "venue": null, | |
| "url": "https://doi.org/10.1007/s11042-024-18352-3" | |
| } | |
| }, | |
| { | |
| "30": { | |
| "title": "doi:10.48550/arXiv.1810.04805.", | |
| "author": "J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep\nbidirectional transformers for language understanding, arXiv preprint\narXiv:1810.04805 (2018).", | |
| "venue": null, | |
| "url": "https://doi.org/10.48550/arXiv.1810.04805" | |
| } | |
| }, | |
| { | |
| "31": { | |
| "title": "doi:10.1109/TNNLS.2020.3029181.", | |
| "author": "L. Zhen, P. Hu, X. Peng, R. S. M. Goh, J. T. Zhou, Deep multimodal transfer\nlearning for cross-modal retrieval, IEEE Transactions on Neural Networks and\nLearning Systems 33 (2) (2020) 798\u2013810.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TNNLS.2020.3029181" | |
| } | |
| }, | |
| { | |
| "32": { | |
| "title": "doi:10.1109/TPAMI.2022.3146234.", | |
| "author": "Z. Fang, J. Lu, F. Liu, G. Zhang, Semi-supervised heterogeneous domain\nadaptation: Theory and algorithms, IEEE Transactions on Pattern Analysis and\nMachine Intelligence 45 (1) (2023) 1087\u20131105.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TPAMI.2022.3146234" | |
| } | |
| }, | |
| { | |
| "33": { | |
| "title": "doi:10.1145/3241055.", | |
| "author": "L. Zhao, Z. Chen, L. T. Yang, M. J. Deen, Z. J. Wang, Deep semantic mapping for\nheterogeneous multimedia transfer learning using co-occurrence data, ACM\nTrans. Multimedia Comput. Commun. Appl. 15 (1, S) (FEB 2019).", | |
| "venue": null, | |
| "url": "https://doi.org/10.1145/3241055" | |
| } | |
| }, | |
| { | |
| "34": { | |
| "title": "doi:10.1109/TNNLS.2017.2751102.", | |
| "author": "Y. Yan, Q. Wu, M. Tan, M. K. Ng, H. Min, I. W. Tsang, Online heterogeneous\ntransfer by hedge ensemble of offline and online decisions, IEEE Transactions\non Neural Networks and Learning Systems 29 (7) (2018) 3252\u20133263.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TNNLS.2017.2751102" | |
| } | |
| }, | |
| { | |
| "35": { | |
| "title": "doi:10.1007/978-3-642-15561-1_16.", | |
| "author": "K. Saenko, B. Kulis, M. Fritz, T. Darrell, Adapting visual category models to\nnew domains, in: Computer Vision\u2013ECCV 2010: 11th European Conference on\nComputer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings,\nPart IV 11, Springer, 2010, pp. 213\u2013226.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1007/978-3-642-15561-1_16" | |
| } | |
| }, | |
| { | |
| "36": { | |
| "title": "doi:10.1109/CVPR.2011.5995702.", | |
| "author": "B. Kulis, K. Saenko, T. Darrell, What you saw is not what you get: Domain\nadaptation using asymmetric kernel transforms, in: Proceedings of the 2011\nIEEE Conference on Computer Vision and Pattern Recognition, 2011, pp.\n1785\u20131792.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/CVPR.2011.5995702" | |
| } | |
| }, | |
| { | |
| "37": { | |
| "title": "doi:10.1109/TPAMI.2018.2866846.", | |
| "author": "L. Li, Z. Zhang, Semi-supervised domain adaptation by covariance matching, IEEE\nTransactions on Pattern Analysis and Machine Intelligence 41 (11) (2019)\n2724\u20132739.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TPAMI.2018.2866846" | |
| } | |
| }, | |
| { | |
| "38": { | |
| "title": "doi:10.1109/TIP.2021.3094137.", | |
| "author": "H. Wu, H. Zhu, Y. Yan, J. Wu, Y. Zhang, M. K. Ng, Heterogeneous domain\nadaptation by information capturing and distribution matching, IEEE\nTransactions on Image Processing 30 (2021) 6364\u20136376.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TIP.2021.3094137" | |
| } | |
| }, | |
| { | |
| "39": { | |
| "title": "doi:10.1109/TNNLS.2018.2868854.", | |
| "author": "J. Li, K. Lu, Z. Huang, L. Zhu, H. T. Shen, Heterogeneous domain adaptation\nthrough progressive alignment, IEEE Transactions on Neural Networks and\nLearning Systems 30 (5) (2018) 1381\u20131391.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TNNLS.2018.2868854" | |
| } | |
| }, | |
| { | |
| "40": { | |
| "title": "doi:10.1109/TPAMI.2022.3163338.", | |
| "author": "M. Ebrahimi, Y. Chai, H. H. Zhang, H. Chen, Heterogeneous domain adaptation\nwith adversarial neural representation learning: Experiments on e-commerce\nand cybersecurity, IEEE Transactions on Pattern Analysis and Machine\nIntelligence 45 (2) (2023) 1862\u20131875.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TPAMI.2022.3163338" | |
| } | |
| }, | |
| { | |
| "41": { | |
| "title": "doi:10.1145/2629528.", | |
| "author": "K. D. Feuz, D. J. Cook, Transfer learning across feature-rich heterogeneous\nfeature spaces via feature-space remapping (FSR), ACM Trans Intell Syst\nTechnol 6 (1) (APR 2015).", | |
| "venue": null, | |
| "url": "https://doi.org/10.1145/2629528" | |
| } | |
| }, | |
| { | |
| "42": { | |
| "title": "doi:10.1109/TPAMI.2014.2343216.", | |
| "author": "M. Xiao, Y. Guo, Feature space independent semi-supervised domain adaptation\nvia kernel matching, IEEE Transactions on Pattern Analysis and Machine\nIntelligence 37 (1) (2015) 54\u201366.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TPAMI.2014.2343216" | |
| } | |
| }, | |
| { | |
| "43": { | |
| "title": "doi:10.1109/TPAMI.2013.167.", | |
| "author": "W. Li, L. Duan, D. Xu, I. W. Tsang, Learning with augmented features for\nsupervised and semi-supervised heterogeneous domain adaptation, IEEE\nTransactions on Pattern Analysis and Machine Intelligence 36 (6) (2013)\n1134\u20131148.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TPAMI.2013.167" | |
| } | |
| }, | |
| { | |
| "44": { | |
| "title": "doi:10.1109/TIP.2019.2917867.", | |
| "author": "F. Yu, X. Wu, J. Chen, L. Duan, Exploiting images for video recognition:\nheterogeneous feature augmentation via symmetric adversarial learning, IEEE\nTransactions on Image Processing 28 (11) (2019) 5308\u20135321.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TIP.2019.2917867" | |
| } | |
| }, | |
| { | |
| "45": { | |
| "title": "doi:10.1609/aaai.v30i1.10211.", | |
| "author": "J. Zhou, S. Pan, I. Tsang, S.-S. Ho, Transfer learning for cross-language text\ncategorization through active correspondences construction, Proceedings of\nthe AAAI Conference on Artificial Intelligence 30 (1) (Mar. 2016).", | |
| "venue": null, | |
| "url": "https://doi.org/10.1609/aaai.v30i1.10211" | |
| } | |
| }, | |
| { | |
| "46": { | |
| "title": "doi:10.1109/TNNLS.2019.2913723.", | |
| "author": "H. Li, S. J. Pan, S. Wang, A. C. Kot, Heterogeneous domain adaptation via\nnonlinear matrix factorization, IEEE Transactions on Neural Networks and\nLearning Systems 31 (3) (2019) 984\u2013996.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TNNLS.2019.2913723" | |
| } | |
| }, | |
| { | |
| "47": { | |
| "title": "doi:10.1609/aaai.v33i01.33018602.", | |
| "author": "H. Li, S. J. Pan, R. Wan, A. C. Kot, Heterogeneous transfer learning via deep\nmatrix completion with adversarial kernel embedding, Proceedings of the AAAI\nConference on Artificial Intelligence 33 (01) (2019) 8602\u20138609.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1609/aaai.v33i01.33018602" | |
| } | |
| }, | |
| { | |
| "48": { | |
| "title": "doi:10.1109/TPAMI.2020.2994749.", | |
| "author": "H.-J. Ye, D.-C. Zhan, Y. Jiang, Z.-H. Zhou, Heterogeneous few-shot model\nrectification with semantic mapping, IEEE Transactions on Pattern Analysis\nand Machine Intelligence 43 (11) (2021) 3878\u20133891.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TPAMI.2020.2994749" | |
| } | |
| }, | |
| { | |
| "49": { | |
| "title": "arXiv:2303.08774.", | |
| "author": "OpenAI, GPT-4 technical report (2023).", | |
| "venue": null, | |
| "url": "http://arxiv.org/abs/2303.08774" | |
| } | |
| }, | |
| { | |
| "50": { | |
| "title": "doi:https://doi.org/10.1006/jcss.1997.1504.", | |
| "author": "Y. Freund, R. E. Schapire, A decision-theoretic generalization of online\nlearning and an application to boosting, Journal of Computer and System\nSciences 55 (1) (1997) 119\u2013139.", | |
| "venue": null, | |
| "url": "https://doi.org/https://doi.org/10.1006/jcss.1997.1504" | |
| } | |
| }, | |
| { | |
| "51": { | |
| "title": "doi:10.1016/j.asoc.2019.105819.", | |
| "author": "P. Zhao, H. Gao, Y. Lu, T. Wu, A cross-media heterogeneous transfer learning\nfor preventing over-adaption, Applied Soft Computing 85 (DEC 2019).", | |
| "venue": null, | |
| "url": "https://doi.org/10.1016/j.asoc.2019.105819" | |
| } | |
| }, | |
| { | |
| "52": { | |
| "title": "doi:10.1080/14786440109462720.", | |
| "author": "K. P. F.R.S., LIII. on lines and planes of closest fit to systems of points\nin space, The London, Edinburgh, and Dublin Philosophical Magazine and\nJournal of Science 2 (11) (1901) 559\u2013572.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1080/14786440109462720" | |
| } | |
| }, | |
| { | |
| "53": { | |
| "title": "doi:10.1109/TIP.2019.2912126.", | |
| "author": "W.-Y. Chen, T.-M. H. Hsu, Y.-H. H. Tsai, M.-S. C. F. Ieee, Y.-C. F. Wang,\nTransfer neural trees: Semi-supervised heterogeneous domain adaptation and\nbeyond, IEEE Transactions on Image Processing 28 (9) (2019) 4620\u20134633.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TIP.2019.2912126" | |
| } | |
| }, | |
| { | |
| "54": { | |
| "title": "doi:10.1007/s11263-015-0816-y.", | |
| "author": "O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang,\nA. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, L. Fei-Fei, ImageNet\nlarge scale visual recognition challenge, International Journal of Computer\nVision (IJCV) 115 (3) (2015) 211\u2013252.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1007/s11263-015-0816-y" | |
| } | |
| }, | |
| { | |
| "55": { | |
| "title": "doi:10.1109/MIS.2013.32.", | |
| "author": "Q. Wu, M. K. Ng, Y. Ye, Cotransfer learning using coupled Markov chains with\nrestart, IEEE Intelligent Systems 29 (4) (2013) 26\u201333.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/MIS.2013.32" | |
| } | |
| }, | |
| { | |
| "56": { | |
| "title": "doi:10.1109/TNNLS.2021.3105868.", | |
| "author": "Y. Yao, X. Li, Y. Zhang, Y. Ye, Multisource heterogeneous domain adaptation\nwith conditional weighting adversarial network, IEEE Transactions on Neural\nNetworks and Learning Systems (2021).", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TNNLS.2021.3105868" | |
| } | |
| }, | |
| { | |
| "57": { | |
| "title": "doi:10.1109/TKDE.2017.2685597.", | |
| "author": "Q. Wu, H. Wu, X. Zhou, M. Tan, Y. Xu, Y. Yan, T. Hao, Online transfer learning\nwith multiple homogeneous or heterogeneous sources, IEEE Transactions on\nKnowledge and Data Engineering 29 (7) (2017) 1494\u20131507.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TKDE.2017.2685597" | |
| } | |
| }, | |
| { | |
| "58": { | |
| "title": "doi:10.1016/j.patrec.2018.02.011.", | |
| "author": "W.-C. Fang, Y.-T. Chiang, A discriminative feature mapping approach to\nheterogeneous domain adaptation, Pattern Recognition Letters 106 (2018)\n13\u201319.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1016/j.patrec.2018.02.011" | |
| } | |
| }, | |
| { | |
| "59": { | |
| "title": "doi:10.1109/TCYB.2019.2957033.", | |
| "author": "C.-X. Ren, J. Feng, D.-Q. Dai, S. Yan, Heterogeneous domain adaptation via\ncovariance structured feature translators, IEEE Transactions on Cybernetics\n51 (4) (2021) 2166\u20132177.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TCYB.2019.2957033" | |
| } | |
| }, | |
| { | |
| "60": { | |
| "title": "doi:10.1007/s10489-021-02756-x.", | |
| "author": "N. Alipour, J. Tahmoresnezhad, Heterogeneous domain adaptation with statistical\ndistribution alignment and progressive pseudo label selection, Applied\nIntelligence 52 (7) (2022) 8038\u20138055.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1007/s10489-021-02756-x" | |
| } | |
| }, | |
| { | |
| "61": { | |
| "title": "doi:10.1145/3464324.", | |
| "author": "S. Niu, Y. Jiang, B. Chen, J. Wang, Y. Liu, H. Song, Cross-modality transfer\nlearning for image-text information management, ACM Transactions on\nManagement Information System (TMIS) 13 (1) (MAR 2022).", | |
| "venue": null, | |
| "url": "https://doi.org/10.1145/3464324" | |
| } | |
| }, | |
| { | |
| "62": { | |
| "title": "doi:10.1109/TIP.2015.2431440.", | |
| "author": "S. Shekhar, V. M. Patel, H. V. Nguyen, R. Chellappa, Coupled projections for\nadaptation of dictionaries, IEEE Transactions on Image Processing 24 (10)\n(OCT 2015).", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TIP.2015.2431440" | |
| } | |
| }, | |
| { | |
| "63": { | |
| "title": "doi:10.1016/j.patcog.2016.03.009.", | |
| "author": "A. S. Mozafari, M. Jamzad, A SVM-based model-transferring method for\nheterogeneous domain adaptation, Pattern Recognition 56 (2016) 142\u2013158.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1016/j.patcog.2016.03.009" | |
| } | |
| }, | |
| { | |
| "64": { | |
| "title": "doi:10.3390/app10165631.", | |
| "author": "A. Magotra, J. Kim, Improvement of heterogeneous transfer learning efficiency\nby using Hebbian learning principle, Applied Sciences 10 (16) (AUG 2020).", | |
| "venue": null, | |
| "url": "https://doi.org/10.3390/app10165631" | |
| } | |
| }, | |
| { | |
| "65": { | |
| "title": "doi:10.1109/TCSVT.2019.2942688.", | |
| "author": "Y. Su, Y. Li, W. Nie, D. Song, A.-A. Liu, Joint heterogeneous feature learning\nand distribution alignment for 2D image-based 3D object retrieval, IEEE\nTransactions on Circuits and Systems for Video Technology 30 (10) (2020)\n3765\u20133776.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TCSVT.2019.2942688" | |
| } | |
| }, | |
| { | |
| "66": { | |
| "title": "doi:10.1109/TIFS.2018.2885284.", | |
| "author": "T. d. F. Pereira, A. Anjos, S. Marcel, Heterogeneous face recognition using\ndomain specific units, IEEE Transactions on Information Forensics and\nSecurity 14 (7) (2019) 1803\u20131816.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TIFS.2018.2885284" | |
| } | |
| }, | |
| { | |
| "67": { | |
| "title": "doi:10.1109/ACCESS.2020.3038906.", | |
| "author": "S. Yang, K. Fu, X. Yang, Y. Lin, J. Zhang, C. Peng, Learning domain-invariant\ndiscriminative features for heterogeneous face recognition, IEEE Access 8\n(2020) 209790\u2013209801.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/ACCESS.2020.3038906" | |
| } | |
| }, | |
| { | |
| "68": { | |
| "title": "doi:10.1109/ICHI57859.2023.00028.", | |
| "author": "Y. Ji, Y. Gao, R. Bao, Q. Li, D. Liu, Y. Sun, Y. Ye, Prediction of covid-19\npatients\u2019 emergency room revisit using multi-source transfer learning, in:\n2023 IEEE 11th International Conference on Healthcare Informatics (ICHI),\nIEEE Computer Society, 2023, pp. 138\u2013144.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/ICHI57859.2023.00028" | |
| } | |
| }, | |
| { | |
| "69": { | |
| "title": "doi:10.1109/34.824819.", | |
| "author": "A. K. Jain, R. P. W. Duin, J. Mao, Statistical pattern recognition: A review,\nIEEE Transactions on Pattern Analysis and Machine Intelligence 22 (1) (2000)\n4\u201337.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/34.824819" | |
| } | |
| }, | |
| { | |
| "70": { | |
| "title": "doi:10.1109/BTAS.2015.7358780.", | |
| "author": "J. Bernhard, J. Barr, K. W. Bowyer, P. Flynn, Near-IR to visible light face\nmatching: Effectiveness of pre-processing options for commercial matchers,\nin: 2015 IEEE 7th International Conference on Biometrics Theory, Applications\nand Systems (BTAS), IEEE, 2015, pp. 1\u20138.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/BTAS.2015.7358780" | |
| } | |
| }, | |
| { | |
| "71": { | |
| "title": "doi:10.1109/TIP.2015.2465157.", | |
| "author": "L. Yang, L. Jing, M. K. Ng, Robust and non-negative collective matrix\nfactorization for text-to-image transfer learning, IEEE Transactions on Image\nProcessing 24 (12) (2015) 4701\u20134714.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TIP.2015.2465157" | |
| } | |
| }, | |
| { | |
| "72": { | |
| "title": "doi:10.1109/TPAMI.2013.142.", | |
| "author": "J. C. Pereira, E. Coviello, G. Doyle, N. Rasiwasia, G. R. Lanckriet, R. Levy,\nN. Vasconcelos, On the role of correlation and abstraction in cross-modal\nmultimedia retrieval, IEEE Transactions on Pattern Analysis and Machine\nIntelligence 36 (3) (2013) 521\u2013535.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/TPAMI.2013.142" | |
| } | |
| }, | |
| { | |
| "73": { | |
| "title": "doi:10.48550/arXiv.1503.02531.", | |
| "author": "G. Hinton, O. Vinyals, J. Dean, Distilling the knowledge in a neural network,\narXiv preprint arXiv:1503.02531 (2015).", | |
| "venue": null, | |
| "url": "https://doi.org/10.48550/arXiv.1503.02531" | |
| } | |
| }, | |
| { | |
| "74": { | |
| "title": "doi:10.1007/s11263-021-01453-z.", | |
| "author": "J. Gou, B. Yu, S. J. Maybank, D. Tao, Knowledge distillation: A survey,\nInternational Journal of Computer Vision 129 (2021) 1789\u20131819.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1007/s11263-021-01453-z" | |
| } | |
| }, | |
| { | |
| "75": { | |
| "title": "doi:10.1109/ICHI.2018.00095.", | |
| "author": "M. A. Ahmad, C. Eckert, A. Teredesai, Interpretable machine learning in\nhealthcare, in: Proceedings of the 2018 ACM International Conference on\nBioinformatics, Computational Biology, and Health Informatics, 2018, pp.\n559\u2013560.", | |
| "venue": null, | |
| "url": "https://doi.org/10.1109/ICHI.2018.00095" | |
| } | |
| } | |
| ], | |
| "url": "http://arxiv.org/html/2310.08459v3" | |
| } |