| { | |
| "title": "VAID: Indexing View Designs in Visual Analytics System", | |
| "abstract": "Visual analytics (VA) systems have been widely used in various application domains. However, VA systems are complex in design, which imposes a serious problem: although the academic community constantly designs and implements new designs, the designs are difficult to query, understand, and refer to by subsequent designers. To mark a major step forward in tackling this problem, we index VA designs in an expressive and accessible way, transforming the designs into a structured format. We first conducted a workshop study with VA designers to learn user requirements for understanding and retrieving professional designs in VA systems. Thereafter, we came up with an index structure VAID to describe advanced and composited visualization designs with comprehensive labels about their analytical tasks and visual designs. The usefulness of VAID was validated through user studies. Our work opens new perspectives for enhancing the accessibility and reusability of professional visualization designs.", | |
| "sections": [ | |
| { | |
| "section_id": "1", | |
| "parent_section_id": null, | |
| "section_name": "1. Introduction", | |
| "text": "Visual analytics (VA) combines data mining and visualization techniques to help users with data exploration in different domains, such as biology (Krueger et al., 2020 ###reference_b32###; Lekschas et al., 2018 ###reference_b37###), sports (Stein et al., 2018 ###reference_b61###; Cao et al., 2021 ###reference_b9###), urban (Deng et al., 2023b ###reference_b19###; Lu et al., 2023 ###reference_b42###), and explainable AI (Gou et al., 2021 ###reference_b23###; Ono et al., 2021 ###reference_b49###).\nResearchers in VA have developed advanced VA systems with highly customized visualization designs for obtaining insight into data (Sacha et al., 2014 ###reference_b54###).\nDesigning effective VA systems is highly challenging and demanding, requiring close collaboration between experienced visualization practitioners and domain experts.\nViews are basic building blocks of VA systems.\nTo create an effective VA system, it is critical to design views by mapping the data and tasks derived from domain problems to visual designs (Munzner, 2009 ###reference_b47###).\nRecent advance in visualization has attempted to automate such a mapping process (Srinivasan et al., 2018 ###reference_b60###; Deng et al., 2022 ###reference_b17###; Chen et al., 2021b ###reference_b14###).\nHowever, these studies recommend basic statistical charts (e.g., bar and line charts) for low-level analytical tasks such as finding distributions.\nThey can hardly support designing views in VA systems that deal with complex datasets and tasks (Keim et al., 2008a ###reference_b30###).\nThe existing VA design pipeline heavily relies on researchers\u2019 experience, requiring surveying related studies and summarizing design requirements for new scenarios.\nPassing examples can offer valuable inspiration due to the multitude of design styles in existence (Lee et al., 2010 ###reference_b36###; Herring et al., 2009 ###reference_b26###; Bigelow et al., 2014 ###reference_b4###).\nTo better support the designers and researchers of visual analytics, inspired by research in creativity support (Shneiderman, 2002 ###reference_b59###), we believe it is important to facilitate the ideation process by enabling the exploration of previous VA view design.\nHowever, without an effective indexing method, currently, these view designs can only be searched through simple keywords, such as domain problems, which cannot fulfill the requirements of VA designers.\nUnable to search with more fine-grained requirements, they struggle to draw inspiration from a large number of prior successful designs.\nTo address the challenge, we aim to propose an indexing approach for view designs in VA systems collectively considering tasks, data, and visualizations (Munzner, 2009 ###reference_b47###).\nHowever, it is unclear how to define an index structure based on these factors.\nFor example, from the visualization perspective, a VA design might contain a hybrid use of different visual elements, such as composite visualizations (Javed and Elmqvist, 2012 ###reference_b29###) and glyphs (Borgo et al., 2013 ###reference_b5###).\nWhen representing these complex designs with indexes, preserving all details can lead to difficulties for designers in specifying their searching criteria and understanding their structure and semantic meanings.\nOn the other hand, if the information is over-abstracted to a high-level description, such as a few keywords, designers may struggle to accurately express their design when searching for required design information from indexes of 
returned visual designs.\nIt is important to balance expressiveness and efficiency when designing the index structure.\nTherefore, to understand designers\u2019 requirements on the index structure, we conducted a workshop study with 12 VA designers, most of whom have published papers in the IEEE VAST conference as the first author.\nWith the study, we validated the necessity of indexing past designs for creating new ones and collected the requirements for constructing such an index.\nBased on the user feedback from the workshop study, we formulated an index structure named VAID for VA designs inspired by Vega-Lite (Satyanarayan et al., 2017 ###reference_b55###).\nVAID enables an expressive characterization of visualizations with analytical tasks and visual designs.\nTo ensure the coverage and comprehensiveness of the index, we iteratively labeled the view designs and refined the keys and values of the index structure.\nAs a result, we gained 442 view designs from 124 VA systems and formed an informative index structure for them.\nTo demonstrate the usefulness of VAID, we conducted a user study using a prototype for the exploration of VAID with 12 participants.\nSpecifically, we asked participants to query designs for specific analytical tasks, data types, mark types, etc.\nUser feedback showed that VAID could help them query diverse and useful visual designs, thereby aiding their design exploration.\nLeveraging the usefulness of VAID in presenting view designs, we proceeded to conduct an in-depth analysis and obtained findings into the patterns of VA view designs.\nFinally, we concluded our research by discussing future directions and limitations.\nThe contributions of this paper include:\nrequirements for understanding and indexing views in VA derived from a workshop study;\nan effective index structure VAID for VA designs (including analytical tasks and visual designs);\na user study based on an exploration prototype111https://VIS-VAID.github.io/ ###reference_VIS-VAID.github.io/### to demonstrate the usefulness of VAID;\nan in-depth analysis of existing view designs and research opportunities based on VAID." | |
| }, | |
| { | |
| "section_id": "2", | |
| "parent_section_id": null, | |
| "section_name": "2. Related Work", | |
| "text": "This paper is related to studies about visualization indexing, visual analytics design studies, and visualization typologies." | |
| }, | |
| { | |
| "section_id": "2.1", | |
| "parent_section_id": "2", | |
| "section_name": "2.1. Visualization Indexing", | |
| "text": "Given that visualizations usually have complex structures of visual components, numerous studies investigate the indexing and searching of visualizations.\nOne way for indexing is by assigning tags to the visualizations.\nMany visualization datasets collect visualizations and categorize them by types, such as MASSVIS (Borkin et al., 2013 ###reference_b6###), VizNet (Hu et al., 2019 ###reference_b28###), VIS30K (Chen et al., 2021a ###reference_b13###), VisImages (Deng et al., 2023c ###reference_b18###), and Many Eyes (Viegas et al., 2007 ###reference_b63###).\nTagging visualizations is useful for machine learning model training, but the tags have limitations when it comes to analyzing visualization configurations.\nImportant configurations like visual encodings, compositions, and associated tasks are crucial for comprehending the designs of visual analytics, and these aspects go beyond the scope of traditional tags.\nComputational methods have been used to extract and index visualizations.\nFor example, when retrieving SVG charts, to ensure both the similarity of visual structures and data distributions, Li et al. (Li et al., 2022 ###reference_b39###) proposed a method based on graph neural networks for feature modeling.\nHoque et al. (Hoque and Agrawala, 2020 ###reference_b27###) collected visualizations that are implemented by D3.js and parsed the hierarchical structures of the visualizations.\nHowever, parsing and analyzing bitmap charts is a more challenging task compared to SVG charts.\nA series of methods adopt computer vision methods to reverse-engineering visualizations (Savva et al., 2011 ###reference_b56###; Poco and Heer, 2017 ###reference_b51###; Ying et al., 2023b ###reference_b70###; Zhou et al., 2023 ###reference_b72###) or extracting numerical representations for charts indexing (Ye et al., 2022 ###reference_b67###).\nThough effective, these methods might not be applicable to the charts in visualization publications, which usually have complex layouts and composite designs.\nIn this work, we focus on analyzing visualizations in the context of visual analytics, which poses higher requirements for data labeling. Specifically, it not only requires labeling the meta information like chart positions but also the information related to visualization literacy (e.g., visual encodings and tasks).\nOur efforts form a valuable index structure of visualization designs from state-of-the-art VA systems." | |
| }, | |
| { | |
| "section_id": "2.2", | |
| "parent_section_id": "2", | |
| "section_name": "2.2. Visualization Design in Visual Analytics", | |
| "text": "Since the analysis problems and data structures are getting more complex in recent years, visual analytics (VA) systems are equipped with more features to fulfill the analytical requirements.\nTherefore, researchers reflected on the scope and challenges of VA (Keim et al., 2008b ###reference_b31###; Kui et al., 2022 ###reference_b33###) and proposed a series of conceptual models.\nFor example, Sacha et al. (Sacha et al., 2014 ###reference_b54###) have proposed a knowledge generation model to characterize VA systems and their use in sensemaking.\nAccording to the model, VA systems should be well integrated into the human knowledge generation loop from hypothesis and action to derive the findings and insights.\nMoreover, they think that the VA system is composed of three components, namely, data, visualization, and algorithms, involving the pipeline of information visualization and the process of knowledge discovery and data mining.\nTo design visualizations that are compatible with the knowledge generation pipeline (Sacha et al., 2014 ###reference_b54###), Munzner (Munzner, 2009 ###reference_b47###) has proposed a nested model for visualization design and evaluation.\nThe nested model consists of four stages: 1) domain problem and data characterization, 2) operation and data type abstraction, 3) visualization design, and 4) algorithm design.\nThe first two stages are considered different levels of abstraction of the data and tasks.\nWith the abstracted data and tasks, the design choices of visualizations can be further derived based on the theories in information visualization, such as expressiveness and effectiveness criteria (Mackinlay, 1986 ###reference_b44###) and the rules of visual mapping (Card et al., 1999 ###reference_b11###; Munzner, 2014 ###reference_b48###).\nThe nested model provides prescriptive guidance for visualization experts in constructing VA systems.\nInspired by the model, we construct an index structure of visual analytics describing visualizations from their analytical tasks and visual designs.\nCompared to conceptual models, the structure we present is a unique contribution to the community for data-driven analysis and design inspiration to promote research on VA systems." | |
| }, | |
| { | |
| "section_id": "2.3", | |
| "parent_section_id": "2", | |
| "section_name": "2.3. Visualization Taxonomy and Grammars", | |
| "text": "The classification of visualizations (Harris, 1999 ###reference_b24###; Chi, 2000 ###reference_b15###; Lohse et al., 1994 ###reference_b41###; Engelhardt and Richards, 2018 ###reference_b21###; Meirelles, 2013 ###reference_b45###) has been studied for a long time.\nFor example, Borkin et al. (Borkin et al., 2013 ###reference_b6###) classified visualizations into 12 categories, such as Area, Bar and Circle, each comprising multiple sub-types.\nHowever, the designs of visualizations for visual analytics usually have novel layouts and complex compositions.\nChen et al. (Chen et al., 2021b ###reference_b14###) have attempted to map each view in VA systems to a specific visualization category in Borkin\u2019s taxonomy.\nHowever, they discovered that a view might be ambiguous to a specific category because many designs contain various visual components of multiple categories.\nThey reflected on the categorization and proposed to follow Javed et al.\u2019s theory of composite visualization (Javed and Elmqvist, 2012 ###reference_b29###) to characterize the visual designs in further research.\nBased on this reflection, we regard the visualizations in VA systems as composite visualizations and characterize the relations between the components with a hierarchical specification.\nGrammars of graphics (Wilkinson, 2012 ###reference_b64###) are fundamental in visualization systems, indicating the visual mappings from data to visual channels and layouts.\nMackinlay (Mackinlay, 1986 ###reference_b44###) formulated visualizations as a graphical presentation problem and adopted relation tuples to specify the data features and visual encodings.\nHeer et al. (Heer and Bostock, 2010 ###reference_b25###) proposed using declarative languages to describe and specify the visualizations, which is intuitive for the programmers.\nAfter that, Bostock et al. (Bostock et al., 2011 ###reference_b7###) delivered D3, a programming grammar to operate on the graphical elements of document object model pages.\nTo further reduce the burden of visualization specification, Satyanarayan et al. (Satyanarayan et al., 2017 ###reference_b55###) proposed Vega-Lite, a JSON-based declarative programming language, by which users can render a visualization with even several lines of JSON text.\nAfter that, specifying visualization using JSON files becomes widely used, and similar languages are evolving, such as ECharts (Li et al., 2018 ###reference_b38###).\nIn this paper, we refer to Vega-Lite and extend its style to support the indexing of view designs in VA systems." | |
| }, | |
| { | |
| "section_id": "3", | |
| "parent_section_id": null, | |
| "section_name": "3. Preliminary Study", | |
| "text": "We conducted a workshop study with VA designers to\n1) understand whether reviewing state-of-the-art visual designs can help visualization designers in design inspiration and\n2) obtain the requirements for understanding and indexing VA view designs." | |
| }, | |
| { | |
| "section_id": "3.1", | |
| "parent_section_id": "3", | |
| "section_name": "3.1. Data Preparation.", | |
| "text": "Before the study, we first prepared the state-of-the-art VA designs and derived an initial index design based on tasks, data, and visualizations (Sacha et al., 2014 ###reference_b54###).\nCollecting Figures.\nWe first collected the figures of VA designs based on VisImages (Deng et al., 2023c ###reference_b18###), which consists of bitmap images collected from IEEE InfoVis and VAST, the top venues for visualization and visual analytics.\nWe chose the papers in the IEEE VAST from 2016 to 2020, the primary venue for VA research (253).\nThen we identified the papers that propose visual analytics systems, whose paper types are commonly referred to as applications or design studies (124).\nFor each paper, we selected one figure containing the complete system interface, which was usually the teaser.\nSeparating Individual Visualization Views.\nWe further separated the area of different views in the system figures.\nIn most cases, a view is assigned a specific name for identification. However, a view sometimes consists of several independent sub-views. If the data among sub-views are not directly related (such as sharing the axes or connected with visual links), we decompose the view into sub-views for different visualizations.\nEach sub-view is regarded to be an individual visualization and is the basic analysis unit throughout the paper.\nAfter the annotation, we obtained an image collection of 442 views from 124 VA systems.\nFor simplicity, the term \u201cview design\u201d in the rest of our paper refers to the design of a view in a VA system in default.\nAnnotating Task/Data/Type.\nDerived from VisImages, the views include information about the chart types and their positions but do not provide labels for the views (including sub-views) within VA systems.\nWe first characterize the view designs by tuples of their task types, data types, and visualization types.\nFor the task type, we used a task taxonomy of data analysis (Amar et al., 2005 ###reference_b2###) for annotation.\nThe taxonomy contains ten types including retrieve value, derive value, filter, find extremum, sort, determine range, characterize distribution, find anomalies, cluster, correlate, and compare.\nTo avoid bias during the annotation, we identified a task only when the original authors had mentioned it explicitly.\nAfter the annotation, 98.87% (437/442) visualizations contain at least one task type.\nFor the data type, we represent the encoded data by its types in visual encoding channels.\nThe data types include quantitative (Q), temporal (T), ordinal (O), nominal (N), and graph-related (G) data (Satyanarayan et al., 2017 ###reference_b55###).\nFor a view design, we summarize the counts of each data type, such as \u201c.\u201d\nWe further labeled the visualization types of each visualization.\nFor most views, we adhere to the labels used in VisImages, as they were originally assigned based on the taxonomy outlined by Borkin et al. (Borkin et al., 2013 ###reference_b6###).\nFor a composite visualization, we characterize its type by decomposing it into several visual components.\nFor example, a scatterplot matrix can be considered as nesting scatterplots into a matrix, which is represented as a tuple: \u201c.\u201d\nBased on the annotating result, we created a prototype named VAID-Alpha.\nThis interface includes the collected figures, associated tasks, data, and their respective types as searchable indexes.\nAdditionally, it features a search engine for direct access." | |
| }, | |
| { | |
| "section_id": "3.2", | |
| "parent_section_id": "3", | |
| "section_name": "3.2. Study Setup", | |
| "text": "In the workshop study, we asked participants to imitate the process of designing visualization prototypes, with a specific focus on the task of creating multiple views to accomplish VA tasks.\nWe followed the think-aloud protocol and gathered qualitative feedback from participants.\nProblem.\nWe selected mini-Challenge 2 from IEEE VAST Challenge 2021, a classic problem in the visual analytics field, due to the need for tasks and datasets with an appropriate level of complexity.\nSpecially, a company, GAStech, hopes to investigate employees\u2019 potential private use of the company cars.\nTo facilitate analysis, the GPS data of each car, records of car assignments, and records of credit card and loyalty card purchases are provided.\nIn our study, the participants are required to achieve three tasks by designing visualization prototypes.\nThe first task (T1) is designing visualizations with only credit card and loyalty card data to identify the popular location and purchase time and potentially discover some anomalies (e.g., weird purchase time and frequent changing purchase location).\nThe second task (T2) is using car assignment data and GPS data to help determine the owner of each card and trying to find some anomalies (e.g., the card owner and purchase activity are not in the same place).\nThe third task (T3) is to reveal the potential unofficial relationships between the employees.\nParticipants.\nWe recruited 12 VA designers from social media\nand our networks. The participants are postgraduate students (6 females and 6 males) with a research interest in visual analytics for various domains, such as digital humanities, sports analysis, medicine, and urban planning.\nTen of them have published papers in IEEE VIS as the first author.\nWe asked participants to report on their experience in designing visualizations for data analysis.\nBased on the pre-study interview, 6 participants (P1, P3, P4, P8, P10, P11) are Ph.D. students who have more than three years of research for visual analytics, while 3 (P2, P9, P12) have less than one year and the remaining 3 participants have less than two years. Specifically, 3 participants (P1, P3, P12) own bachelor\u2019s degrees in design.\nProcedure. Each trial of the workshop study was conducted one-on-one via online meetings.\nA trial lasted about 60 minutes. Before a trial, we first asked participants for their agreement to collect their design process, comments, and results for research use. After that, a study session started with a 15-minute tutorial introducing the prototype VAID-Alpha\u2019s indexes (i.e., how we define the data representations, task types, and visualization types) and several examples illustrating how to use the interface.\nDuring the study, participants were asked to explain how they understood the problems and tasks and what and why visualizations they wanted to design. The participants needed to sketch the prototype visualizations on paper and illustrate how to use the designs to accomplish the tasks.\nThe session ended with post-study interviews where we gathered qualitative feedback from participants through a series of questions." | |
| }, | |
| { | |
| "section_id": "3.3", | |
| "parent_section_id": "3", | |
| "section_name": "3.3. Results", | |
| "text": "The overall reactions from the participants were positive. Here we summarize the participant feedback." | |
| }, | |
| { | |
| "section_id": "3.3.1", | |
| "parent_section_id": "3.3", | |
| "section_name": "3.3.1. Usefulness Analysis", | |
| "text": "From participant feedback, we found that VAID-Alpha is useful for inspiring new design ideas and enhancing users\u2019 original ideas.\nTo understand the effect on the design, we further analyzed the design process of the users including system logs, audio recordings, and notes.\nIn total, we obtained 36 visualization designs from 12 participants, with 16 (44.4%) inspired from scratch and 14 (38.9%) designs enhanced from an original idea.\nThe numbers also conform to the participant feedback, demonstrating the usefulness of our system.\nThe system facilitates \u201cwarm-start\u201d.\nWe discovered that more participants (P2, P5, P7, P8, P10, P11, P12) are inspired when working on T1 compared to T2 and T3.\nThis might be due to the \u201ccold start\u201d of the visualization design process,\nthat is,\ncontributing a prototype from scratch requires inspiration and therefore might be difficult at the beginning.\nThe comments from P5, a junior PhD student, evidenced this inference: \u201cat the beginning, I have no idea about the design. Therefore, I preferred to explore and find some inspiration from the recommendations.\u201d\nAfter finishing T1, her design for T2 followed the previous idea with some enhancement provided by the recommendations.\nP2, P7, P12, as junior researchers, also shared a similar design process.\nBesides,\nsenior PhD students (P8, P10, P11) also tend to gain inspiration at T1.\nP10 commented that he was inspired by a design similar to \u201cstoryline\u201d after searching with data types and came out with the original design.\nThe system should help users understand the design.\nUsers also expressed concerns about comprehension.\nBoth junior (P2, P3, P8) and senior (P10, P11) PhD students encountered problems understanding the view designs.\nP2 noted, \u201cI need to understand the visual encoding for different designs,\u201d appreciating textual explanations about data structure and task types but still finding some complex designs challenging to comprehend.\nP11 also thought that the contextual information provided by the caption was insufficient." | |
| }, | |
| { | |
| "section_id": "3.3.2", | |
| "parent_section_id": "3.3", | |
| "section_name": "3.3.2. Trade-offs between Data, Task, and Visualization", | |
| "text": "During the study, we also surveyed users\u2019 choices for data, tasks, and visualizations for searching visualization designs of interest.\nThe results are shown in Fig. 1 ###reference_###.\n###figure_1### \u201cData\u201d is the most preferred.\nWe discovered that 6 out of 12 participants ranked the data first, and five participants ranked it second.\nThe participants all agreed that data is the most critical factor to consider during visualization design.\nP4, a senior urban planning analyst, commented that \u201cdata and algorithms are the most critical from the perspective of an expert\u201d.\nThe other two seniors, P10 and P11, held a similar opinion.\nP5, a junior, also regarded the data as her priority consideration for visual design, saying that \u201cthe same data could be represented with different visual representations\u201d.\nHowever, knowing how many columns with different data types are encoded seems insufficient.\nP10 commented, \u201cin real scenarios, data transformation would be performed, and it might be more important to tell how the data are mapped to the visual channels\u201d.\n\u201cVisualizations\u201d are more preferred by designers.\nFour participants ranked visualization first, and three of them (P1, P3, P12) are senior designers with bachelor\u2019s degrees in design.\nAn advantage of searching by visualization types is the consistency between the expectations and outputs.\nP1 liked searching by visualizations because \u201cvisualizations are very intuitive for understanding and I can imagine what results will appear\u201d.\nAmong three dimensions, P3 highlighted his preference for visualization:\u201cwhen I search by visualization, I prefer an exact match and exclude all designs without my selection\u201d.\nHowever, before applying the retrieved designs to their own scenarios, users have to understand the visual encodings.\nBoth junior (P2, P3) and senior (P10, P11) Ph.D. students encountered problems understanding retrieved designs.\nP2 appreciated the textual explanation about the data types and task types, commenting that \u201cthe indication of visual encoding helps me to understand the design easily\u201d.\nThey expected more detailed descriptions of the visualizations, not only the types.\n\u201cTasks\u201d are mixed in understanding.\nEven though two participants ranked tasks first,\nhalf of the participants ranked it the last.\nA common problem is a gap between the original analytical question and low-level tasks.\nP12 commented \u201cI am fuzzy about the task types, so I prefer to consider how to visualize all the data first\u201d.\nBesides,\nP10 pointed out that he cannot map tasks such as \u201cobtaining an overview\u201d to low-level tasks.\nP3 explained that he would have a different understanding of the tasks of the question.\nThose comments call for continued efforts to classify the tasks in VA systems for searching.\nDiscussion. 
Based on the observations above, the trade-offs between data, task, and visualization during participants\u2019 search for designs might be related to two factors, including the accessibility of the criteria for searching and the representation power of the indexing approach used in the search engine.\nFirst, participants might focus on the inputs and outputs of the process of designing VA designs, which are available criteria for searching.\nMost participants ranked data as their top choice of search condition as data is the most approachable one among data, task, and visualization.\nTo design a visual analytics system, researchers and designers usually start with data exploration and then consider appropriate design to visualize the data.\nOn the contrary, participants who are good at designing might turn to the outputs of the design, i.e., visualizations, and opt to \u201cregress\u201d their desired design through searching.\nVisualizations are graphical representations of the data, which might be the intermediate search condition for the participants.\nAs mentioned by P1,\u201c when I saw the column timestamp, location, and prices, I immediately came across a line chart to represent purchases by time and location. Then I searched the designs based on the line chart.\u201d\nIn this case, the participant considered the data but chose to use visualization as a representation of the data for searching.\nAnalytical tasks are also important in deriving designs, but a common problem is the gap between the original analytical question and low-level tasks, which makes tasks uncertain at the early stage of design.\nMoreover, an analytical task can be approached through multiple design choices.\nFor example, designers can use different visualizations in different layouts (e.g., overloaded and mirrored) to compare values (LYi et al., 2021 ###reference_b43###).\nTherefore, compared to considering tasks at first, practitioners might turn to questions like how to represent the data and what visualizations might be more aesthetically pleasing.\nSecond, participants might suffer from an insufficient capability of VAID-alpha to represent VA designs.\nAs discussed above, participants turn to visualization instead of data might result from the lack of a more representative method to search visual designs.\nMoreover, the tasks were not clear enough so participants chose not to use the task as their first choice to search.\nTo help practitioners better retrieve designs and further understand their design preferences, we concluded several requirements that might help improve the representativeness of the VAID." | |
| }, | |
| { | |
| "section_id": "3.4", | |
| "parent_section_id": "3", | |
| "section_name": "3.4. Requirements for the Index Design", | |
| "text": "We derived three key requirements for improving the current index design according to the findings:\nIntegration of data and visualization.\nData and Visualization are the most preferred. All users\u2019 comments on data and visualization mention the relations between data and visual channels, namely, visual encodings.\nThe indication of visual encodings can help users better understand how the data can be applied to the design. Inspired by the comments, we aim to propose an efficient method to describe the visual encodings in view designs.\nDescription of visualization composition. Many view designs are composite visualizations, and the composition reflects the data relationship between visual elements. For example, a common VA technique is the \u201cglyph scatterplot\u201d where each scatter is represented by a glyph for additional multi-dimension attributes. Such relationships are hard to describe using existing methods. Additional descriptions of such a relationship are required.\nMore detailed descriptions of analytical tasks.\nUsers\u2019 difficulties in mapping real analytical questions to low-level tasks might be because of the lack of analysis goals, such as summarize, compare, and explore (Brehmer and Munzner, 2013 ###reference_b8###). Comparing distribution and summarizing distribution might require visualization designs that are different in visual encodings and layouts. Besides, graph-related tasks are not investigated in depth. Therefore, we decided to incorporate additional task taxonomies with multiple levels." | |
| }, | |
| { | |
| "section_id": "4", | |
| "parent_section_id": null, | |
| "section_name": "4. VAID", | |
| "text": "In this section, we introduce the process of index design based on the above requirements.\nWe intend to represent the index within the JSON structure since the index may include nested elements (R1, R2).\nIn addition, we elaborated on task characterization using the multi-level typology of VA tasks (Brehmer and Munzner, 2013 ###reference_b8###) (R3).\nFormally, we call the structure \u201cVAID\u201d in the paper, which consists of a two-tuple:\nIn the upcoming section, we introduce the task and design in detail." | |
| }, | |
| { | |
| "section_id": "4.1", | |
| "parent_section_id": "4", | |
| "section_name": "4.1. VAID Task", | |
| "text": "###figure_2### Given that the low-level task taxonomy might be insufficient for users to understand VA tasks, we improve the task structure based on Brehmer and Munzner\u2019s taxonomy of VA tasks (Brehmer and Munzner, 2013 ###reference_b8###).\nIn their taxonomy, a VA task is described with three levels, namely, why, what, and how.\nThe why level refers to the goals of VA, such as present, compare, and browse.\nThis level also describes human behaviors during analysis, so it is named \u201caction\u201d (Munzner, 2014 ###reference_b48###).\nThe what level explains the analytical \u201ctargets\u201d in VA, such as raw data, specific attributes, or data patterns.\nOur current task taxonomy can be regarded as a subset of these targets.\nMoreover, the how level describes the methodology used to achieve the \u201cactions\u201d and \u201ctargets,\u201d including view designs and algorithms.\nIn this work, we focus on the structure of view designs for the how level and do not consider it a part of the analytical task structure.\nWe use action-target pairs to describe the analytical tasks.\nThe actions are the same as their definitions in the original taxonomy.\nFor the targets, we refer to Amar et al.\u2019s low-level task taxonomies for tabular data (Amar et al., 2005 ###reference_b2###) and Lee et al.\u2019s task taxonomies for graph data (Lee et al., 2006 ###reference_b35###).\nThe classifications of actions and targets are summarized in Fig. 2 ###reference_###.\n\nThe detailed annotation process for tasks is presented in the subsequent subsection, conducted together with the VAID design.\nWe identified action-target pairs only when explicitly mentioned by the original authors, and any disagreements during the process were resolved following the same strategy." | |
| }, | |
| { | |
| "section_id": "4.2", | |
| "parent_section_id": "4", | |
| "section_name": "4.2. VAID Design", | |
| "text": "For view designs, we aim to identify visual encodings inside, which are the mappings from data to visual channels and layouts.\nSpecifically, we regard each design as a composite visualization (Javed and Elmqvist, 2012 ###reference_b29###; Deng et al., 2023a ###reference_b16###).\nWe begin by recognizing the overall layout, such as faceting, and then break it down into various visual components.\nEach component is an independent visualization of specific types, such as bar charts, line charts, and Sankey diagrams.\nFor the designs of well-crafted glyphs that are not just combinations of different visualization types, we regard them as \u201cothers\u201d type.\nThen we recognize the visual encodings for each component.\nWe use Vega-Lite (Satyanarayan et al., 2017 ###reference_b55###) as a starting structure because their JSON syntax is intuitive for representing visual structures.\nSpecifically, it characterizes a visualization with the fields of \u201cmark\u201d and \u201cencoding\u201d.\nIn the field \u201cencoding\u201d, the data \u201cfield\u201d, \u201ctype\u201d, and \u201caggregate\u201d are further specified.\nMoreover, it supports basic visual compositions, such as faceting, concatenating, and layering.\nWe iteratively developed the structure to cover the collected view designs and annotated each design.\nThe process of extension and annotation consisted of the following four stages.\n###figure_3### In the first stage, four authors annotated visualizations with original Vega-Lite.\nWe discovered that the Vega-Lite did not support the description of graph-related visualizations, such as Sankey diagrams and tree visualizations, which are common visualization types in view designs. In addition, complex visual compositions are not supported, such as embedding glyphs in graph nodes.\nWe attempted to extend the structure based on the failed cases.\nWe extended the original Vega-Lite structure from three perspectives:\nFirst, we added additional data types (e.g., relational data) and regarded \u201cnode\u201d and \u201clink\u201d to be the two visual channels of graph-related visualizations.\nWe further specified the properties of the nodes and links, such as positions and widths, under the \u201cnode\u201d and \u201clink\u201d labels.\nAn example structure is presented in Fig. 4 ###reference_###A.\nSecond, to handle complex visual compositions, such as embedding glyphs in graph nodes (Elmqvist and Fekete, 2010 ###reference_b20###), we have added a composition type \u201cnested\u201d.\nThe nested visualizations are represented by specifying the \u201cparent\u201d and \u201cchildren\u201d components.\nWe also use a key \u201ccanvas\u201d to indicate which elements of the parent are the embedded children components.\nThe structures of the compositions, i.e., \u201cconcat,\u201d \u201clayer,\u201d and \u201cfacet,\u201d and \u201cnested\u201d composition, are shown in Fig. 4 ###reference_###B.\nThird, the mark types supported by Vega-Lite are also insufficient for the representation of view designs.\nWe have extended the mark types by adding new ones like graph, Sankey, and radar, referring to the typologies proposed by Borkin et al. 
(Borkin et al., 2013 ###reference_b6###).\nIt is noted that the newly added mark types are not graphical primitives that are elemental building blocks of the visualization.\nInstead, some of them are \u201cmacros for complex layered graphics that contain multiple primitive marks\u201d (Satyanarayan et al., 2017 ###reference_b55###), which is consistent with the definitions of Vega-Lite.\nFig. 4 ###reference_### provides an example of complex composition relationships, such as nesting bar charts into node-link graphs.\n###figure_4### In the second stage,\nwe independently annotated the views using our labeling system, including tasks and designs, according to the descriptions in the \u201cVisual Design\u201d and \u201cCase Study\u201d sections in the original papers.\nIn instances where the desired information was unavailable, we reviewed the entire paper.\nAdditionally, for the design structure, we identified cases not addressed by the extended structures.\nWeekly online discussions were conducted to synchronize progress and address such cases.\nThe procedure included initially creating a shared document that outlined the structure and subsequently updating it until all cases could be covered with the extended structures.\nA final document incorporating the design structure and annotation examples for various cases was derived at that stage.\nIn the third stage, each author revised their annotation results using the document from the second stage.\nOne of the authors systematically compared all results, labeling any disagreements.\nThese conflicts were recorded in our system and resolved through discussions among all authors during weekly online meetings, after which the results were updated directly.\nFinally, one of the authors double-checked all details of the results.\nAs a result, we obtained the index structure for VAID design (Fig. 3 ###reference_###) and annotated 442 view designs following the structure.\nDuring the annotation, we followed the principle of consistency.\nIn detail, we annotated the views while striving to preserve the original Vega-Lite structure as much as possible.\nCompared to the original Vega-Lite structure, we extensively expanded the properties of composition, marks, encoding types, and data types based on the VA designs we collected.\nIt is noted that the Vega-Lite structure also provides powerful operators for data transformations, such as filtering.\nHowever, in VA research, many techniques involve complex data processing methods, such as dimensionality reduction, and the classification and identification of these methods are challenging.\nIn this work, we currently focus on view designs and only use part of the Vega-Lite structure to characterize visual encodings (e.g., excluding style-related parameters), which helps with view indexing and understanding.\nMoreover, we followed the principle of minimization, i.e., choosing the representation with the fewest duplications, when there are multiple valid ways to specify a visualization.\nFor example, the component in Fig. 
3 ###reference_###(B2) can be regarded as \u201ca layered visualization with a density plot and a graph\u201d and \u201ca facet visualization whose elements are pie charts\u201d from the perspective of implementation.\nThe positions of the graph nodes and pies both repeatedly encode the row and column attributes.\nReferring to the original descriptions (Fu et al., 2017 ###reference_b22###), the pies without links would fade out, indicating a one-to-one mapping of the graph nodes and pies.\nTherefore, it would be more appropriate to consider the visualization as a layered one composited by a density plot and a nested graph visualization (pie charts embedded into the nodes).\nFor another example, a faceted visualization can be considered as a concatenation of a list of similar visual components.\nRepresenting the chart with \u201cconcat\u201d has to duplicate the structures of similar visual components multiple times.\nInstead, it is much neater and more accurate to use \u201cfacet\u201d to represent it in the context of data visualization.\n###figure_5###" | |
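To make the extended structure concrete, the sketch below expresses the kind of design mentioned above (bar charts nested into the nodes of a node-link graph) using the keys introduced in this section ("nested", "parent", "children", "canvas", "node", "link"); the exact key arrangement and field names are illustrative rather than the verbatim VAID schema.

```python
import json

# Illustrative sketch of an extended, Vega-Lite-style design specification:
# bar charts embedded into the nodes of a node-link graph (cf. Fig. 4).
# Key names follow the extensions described above; field names are hypothetical.
design = {
    "nested": {
        "parent": {
            "mark": "graph",
            "encoding": {
                "node": {"position": {"field": "entity", "type": "nominal"}},
                "link": {"width": {"field": "weight", "type": "quantitative"}},
            },
        },
        "canvas": "node",  # the children are embedded into the parent's nodes
        "children": [
            {
                "mark": "bar",
                "encoding": {
                    "x": {"field": "month", "type": "temporal"},
                    "y": {"field": "amount", "type": "quantitative", "aggregate": "sum"},
                },
            }
        ],
    }
}

print(json.dumps(design, indent=2))
```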
| }, | |
| { | |
| "section_id": "5", | |
| "parent_section_id": null, | |
| "section_name": "5. Evaluating VAID through question-based user study", | |
| "text": "We conduct a user study to evaluate if VAID can assist users in view design.\nTo allow users to experience the design search using VAID, we developed a prototype system named VAID Explorer.\nIn this section, we first provide a brief overview of the prototype, followed by an in-depth discussion of the user study." | |
| }, | |
| { | |
| "section_id": "5.1", | |
| "parent_section_id": "5", | |
| "section_name": "5.1. VAID Explorer", | |
| "text": "We will present the prototype and describe how it is used in the following section.\nThe prototype includes a filtering panel (Fig. 5 ###reference_###A), a gallery view (Fig. 5 ###reference_###C), an indexing view (Fig. 5 ###reference_###B), and a detail view (Fig. 5 ###reference_###D).\nThe filtering panel (Fig. 5 ###reference_###A) supports view design search.\nUsers can select values corresponding to different keys introduced in Sec.4 ###reference_###.\nWe also develop an indexing view (Fig. 5 ###reference_###B) that enables users to input structural indexes with JSON syntax.\nThe retrieved results will be displayed in the gallery view (Fig. 5 ###reference_###C).\nWhen clicking on a result, a detail view (Fig. 5 ###reference_###D) will pop up, showing the VAID along with other contextual metadata (e.g., paper title, paper keyword, figure caption). \nUsers can also explore the designs of other views.\n###figure_6### We present a concise scenario featuring Sherry, a visualization researcher in the field of urban planning, to demonstrate the usage of the prototype.\nShe was given a dataset about credit card records, where there are four columns of \u201ccard id,\u201d \u201ctime,\u201d \u201cstore,\u201d and \u201citem name.\u201d\nShe wants to identify the most popular purchasing time and store, but she\u2019s uncertain about how to represent this data.\nWith the VAID Explorer, she first starts from the filtering panel (Fig. 5 ###reference_###(A)) and selects two data types, nominal and temporal.\nRetrieving 46 view designs, she explores the results and discovers that the third example (Fig. 5 ###reference_###D) can show the data across \u201ctime\u201d and \u201cstore.\u201d\nTo further show the popularity of different times and stores, she wonders how to show distribution with a similar design.\nShe copies the index of this design into the indexing view (Fig. 6 ###reference_###A).\nShe revises and only keeps the sub-structure of the index, and selects the target of \u201cdistribution\u201d (Fig. 6 ###reference_###B).\nShe identifies a design with bar charts and area charts showing the summarization of the nominal dimension and temporal dimension, respectively.\nBased on the example, she has some preliminary thoughts on the view design." | |
| }, | |
| { | |
| "section_id": "5.2", | |
| "parent_section_id": "5", | |
| "section_name": "5.2. Study Setup", | |
| "text": "Our goal is to understand if participants can understand the retrieved designs (e.g., visual encodings) using VAID and use the prototype to obtain design inspirations for VA problems.\nSpecifically, we ask participants to search for visualizations to solve six well-designed VA problems and subsequently, design views.\nThe VA problems have varying complexity and are related to specific analytical tasks, mark types, composition types, or data types,\nsuch as \u201cto find visualizations that encode a three-dimensional dataset with a nominal, a quantitative, and a temporal field\u201d and \u201cto find VA designs for comparing distributions\u201d.\nFor the design task, we choose Mini-Challenge 2 from IEEE VAST Challenge 2022 (Chairs, 2022 ###reference_b12###).\nThe detailed list can be found in the supplementary material.\nGiven the retrieved results, participants were asked to explore the results and select one visualization of interest for in-depth investigation, that is, to understand the design by reading images, indexes, and other metadata (e.g., titles and captions).\nLastly, we ask participants to explain the design of the visualizations.\nParticipants. We recruited 12 visualization practitioners (U1-U12) from our institution through social media and word-of-mouth who reported having experience in creating visualizations for data analysis. The participants include 5 females and 7 males with various backgrounds, including computer science, urban, and digital media design.\nThey are used to analyze data with toolkits such as Python, R, and MATLAB for data analysis. In addition, the libraries they use for data visualizations are Vega-Lite, Excel, Python Matplotlib, and Javascript D3.\nThe participants in this user study all have adequate data visualization or design knowledge but have varying expertise in designing more complex VA systems.\nProcedure.\nAll studies were conducted through one-on-one online meetings.\nEach study consisted of two sessions: a training session (15 minutes) and an experiment session (20 minutes). In the training session, we introduced the definitions of VAID, including taxonomies of task and design.\nThen we introduced the use of the prototype. All participants were allowed to use and explore the data freely to get familiar with the prototype.\nIn the experiment session, each participant was asked to accomplish seven questions (or tasks, but to avoid confusion with the \u201ctask\u201d dimension in VAID, we use the term question here). \nDuring the whole study, we followed the think-aloud protocol. Participants were requested to speak out about their understanding of the retrieved view design and their thoughts about the VAID or prototype when accomplishing tasks.\nThe study ended with a post-study interview session as well as a questionnaire for rating the VAID from different dimensions.\nThe whole user study lasted about 1-1.5 hours. Each participant received $9 as compensation.\nThe authors took notes to record feedback during the study.\n###figure_7###" | |
| }, | |
| { | |
| "section_id": "5.3", | |
| "parent_section_id": "5", | |
| "section_name": "5.3. Results and Feedback", | |
| "text": "All users successfully found the required designs and comprehended the design using VAID. For Question 7, they designed and sketched several views. All sketches can be found in the supplementary materials.\nThe quantitative results from the participants were very positive, as shown in Fig. 7 ###reference_###. Eleven out of twelve participants strongly agreed that based on VAID, the prototype helps to find useful visualization designs for achieving the VA tasks in the study. Similarly, 10 participants strongly appreciated the diversity of VA designs in VAID Explorer.\nVAID is easy to understand and benefits design understanding.\nMost users (11/12) find VAID easy to understand.\nSome (U1) attribute this ease to their familiarity with Vega-Lite, facilitating a swift adaptation.\nOthers, unfamiliar with Vega-Lite, emphasize the significance of VAID\u2019s JSON format and declarative language for clarity.\nU6 further emphasized that basic chart knowledge aids in understanding VAID\u2019s design structure.\nAdditionally, users appreciate VAID\u2019s role in simplifying visual encoding comprehension.\nU5 and U7 noted that VAID complements textual elements like titles and captions, offering insights beyond what these elements convey alone.\nU1 underscored VAID\u2019s importance in clarifying glyph-related aspects to prevent uncertainties in visual interpretation.\nTheir viewpoint confirmed the fulfillment of design requirement R1.\nDespite its clarity, U7 still mentioned that referencing the research paper may still be necessary for more complex aspects.\nVAID Explorer enables users to swiftly derive initial designs based on the given question.\nAll users start Question 7 very quickly.\nFor example, U12 started by selecting different combinations of \u201cactions\u201d and \u201ctargets\u201d values that might conform to the questions and said, \u201cPreviously, I required time to grasp background information; however, now I can randomly select filter parameters to explore potential designs. Examining these designs helps me better understand the question and formulate a vague initial design as a starting point.\u201d\nUsers agreed that the structure of VAID aligns with common design strategies, wherein they typically approach the view design by considering data, task, and visualization.\nU11commented \u201cThe filter options align with my way of thinking about the design question. I find it easy to kickstart the process, given that the question description provides the necessary information. Subsequently, I can quickly discover inspiration.\u201d\nVAID facilitates comprehensive search.\nThe majority of participants expressed satisfaction with the search results during the exploration.\nUsers pointed out, \u201cWhile exploring, I aim to retrieve all pertinent designs without any omissions.\u201d\nU6 complimented, \u201cMy personal preferences may introduce biases and potentially result in overlooking valuable papers. The use of the system, however, ensures a more thorough exploration.\u201d\nAdditionally, U4 highlighted, \u201cIt operates as a knowledge-based retrieval system, effectively supplementing my knowledge.\u201d\nCompared to the preliminary study, the extension of VAID facilitates more flexible search options.\nSpecifically, U3 and U5 appreciated the flexible task options when designing a multi-view VA system. 
They emphasize its effectiveness within a VA system, where the task\u2019s target remains constant while actions vary between views.\nU5 also commented with an example, \u2018\u2018VAID enables designing VA systems in a manner of progressive exploration, identification, and localization of anomalies.\u201d\nTheir opinion verified that the design requirement R3 had been fulfilled.\nHowever, even with the enhanced flexibility in search options provided by VAID, some users still faced challenges while choosing actions.\nU11 expressed that the concise one-word descriptions lack intuitiveness.\nShe suggests, \u201cincluding examples and images as hints would help me make more informed task choices.\u201d\nWith the development of large language models (LLMs), a potential solution is to introduce an LLM to help translate the analytic questions into abstract actions and targets, which might lower the burden of using the system.\nMoreover, the search function was praised by users for its user-friendly nature, as U11 said, \u201cThe filtering and indexing view can complement each other. While the filtering is easy to use but less precise, the index search can assist in retrieving more specific designs, like a bar chart with temporal data on the x-axis.\u201d\nDespite the positive feedback, there is room\nfor improvement.\nThe current combination of multiple options may result in limited or no results, potentially leading to neutral satisfaction among some users as shown in Fig. 7 ###reference_###.\nU12 recommended incorporating a partial matching mechanism to guarantee a significant number of retrieved designs, even under slightly stringent conditions.\nU8 echoed similar sentiments, proposing the addition of a recommendation mechanism akin to common search engines.\nAdditionally, U5 expressed a desire for improved linkage between the filtering view and the indexing view. He mentioned, \u201cTyping a JSON structure from scratch is not easy, but editing one is simpler.\u201d U5 hoped for an initial draft in the index view after selecting options in the filtering view.\nTherefore, valid ranking mechanisms and linkage query functions for view designs can be developed in the future to support an effective visualization query system.\nVAID facilitates incremental design, helping users refine their designs step by step.\nView designs are mainly composite visualizations (Deng et al., 2023a ###reference_b16###), which can be further broken down into various basic charts (Sec. 4.2 ###reference_###). Designers usually start from one basic visualization type that can be decided.\nFor example, many users recognize \u201cmaps\u201d in Question 7 due to the urban scenario.\nSome users also search for initial designs by integrating data, tasks, and their expertise.\nBuilding upon this concept, the VAID Explorer empowers users to employ basic visualizations as search parameters, enabling the design of intricate and extended designs (R2).\nU10 stated, \u201cI usually use several basic charts to meet design requirements. 
After that, I explore ways to refine the design by integrating these charts into a unified view.\u201d She believed that using our tool makes this refinement step easier than before.\nU9 conveyed a strong appreciation for layouts, citing challenges in keyword-based searches due to reliance on personal knowledge, \u201cVAID Explorer addresses this by offering nested or layered layout options\u201d.\nThis preference aligns with sentiments from U5 and U7, who stress spatial constraints in VA views and the importance of thoughtful layout choices within limited space.\nMeanwhile, some users also suggest that the complexity of the design allows for a selective approach to the retrieved designs. For example, in Question 7.3, U7 may opt to utilize only the color and size encoding within a view design from TPFlow (Liu et al., 2019 ###reference_b40###).\nU5 also employed a similar approach. Initially, he selected the circular bar design from one view in (Zhou et al., 2019 ###reference_b73###), and later chose the area chart design from one view in (Zhao et al., 2017 ###reference_b71###) to address the question.\nVAID enhances design aesthetics.\nVAID not only aids in completing a design but also provides additional assistance, as highlighted by U2, who pointed out that VAID contributes to enhancing aesthetics during view design.\nMoreover, users may further improve the design using VAID, even when it effectively achieves the intended task.\nFor example, in Question 7.1, U9 initially obtained a design that satisfied the question. However, upon observing a particular view in Volia (Cao et al., 2018 ###reference_b10###), she recognized the potential of using quadrilaterals or hexagons as the smallest units during map segmentation, which helped her to refine the design accordingly.\nIn Question 7.3, U6 noticed that bar charts were commonly used in previous designs, leading to boring designs that lack adequate novelty. This prompted U6 to explore alternative views within the last referred VA interface, aiming for more varied designs. By combining this exploration with insights from the VAID structure, U6 took a radical approach." | |
| }, | |
| { | |
| "section_id": "6", | |
| "parent_section_id": null, | |
| "section_name": "6. The analysis of VA design collection using VAID", | |
| "text": "After validating the usefulness of VAID, we explore the \u201cdesign demographics\u201d (Hoque and Agrawala, 2020 ###reference_b27###; Battle et al., 2018 ###reference_b3###) based on VAID for view designs with collected views in Sec. 3.1 ###reference_###, which demonstrates VAID enables fine-grained exploration and understanding of existing view designs.\nIn particular, we report on the statistics, including the frequency and co-occurrence patterns for the analytical tasks and visual designs." | |
| }, | |
| { | |
| "section_id": "6.1", | |
| "parent_section_id": "6", | |
| "section_name": "6.1. Overview", | |
| "text": "We first analyze the indexes to understand the composition of visualization designs in visual analytics.\n###figure_8### Only 38% view designs can be implemented with Vega-Lite.\nWe investigated the indexes and opted to understand whether these visualizations can be implemented with common compositions (layer, concat, and facet) of basic mark types with Vega-Lite.\nWe discovered that only 38.2% (169/442) designs could be specified with pure Vega-Lite structures.\nThe results indicate that researchers tend to use novel techniques in VA systems to visualize the data, which demonstrates the unique value of our structures.\nThe limitations of Vega-Lite mainly lie in the limited mark types and the lack of support for graph-related data and nested visualizations.\nThe limited expressiveness of declarative visualization grammars may be a reason for the result that the visualization designs of most existing VA systems are implemented with lower-level Javascript libraries (e.g., D3.js), as they could provide flexible customizations for the designs.\nAbout 64% view designs are composite visualizations.\nComposite visualizations combine multiple visual components together along specific directions (e.g., \u201clayer\u201d, \u201cconcat\u201d and \u201cfacet\u201d in Vega-Lite) or in a hierarchical manner (i.e., \u201cnested\u201d), which can make the visual components well-organized and easy to interpret.\nAs shown in Fig. 8 ###reference_###A, 63.8% (282/442) of the view designs contain composite visualizations.\nWe focus on the number of composition labels in a structure.\nFig. 8 ###reference_###A shows that 34.8% (154/442) of visualizations contain only one composition.\n3.6% (16/442) visualizations contain at least four compositions.\nA visualization can have several types of compositions (four different types in total).\nOnly one visualization has used the maximum number of different composition types, which is four (Fig. 8 ###reference_###B).\nFig. 8 ###reference_###B shows that most visualizations have no more than two composition types.\n90% composite visualizations have a hierarchy level 2.\nAs described in subsection 4.2 ###reference_###, the visualization is represented in a hierarchical JSON syntax with multiple levels of composition. For example, the visualization presented in Fig. 3 ###reference_### has a composition level of three.\nFor composite visualizations, 54.6% (154/282) have a level of one, 34.0% (96/282) have a level of two, and 8.1% (23/282) have a level of three.\nOnly 9 visualizations have a level of four, which is the maximum level of the hierarchy.\nThe numbers indicate that most composite visualizations only use one or two levels of composition.\nAdding more levels of composition demands encoding more data columns, which might go beyond the requirement of analysis scenarios.\nMoreover, more levels of composition increase implementation difficulty and visual complexity.\nMore compositions, more tasks achieved. Investigating VAID, we discover that all visualizations have at least one action-target task. The ones with composition achieve 1.49 tasks on average, while the ones without composition are designed for 1.33 tasks on average.\nFig. 8 ###reference_###D shows the average number of tasks vs. the number of compositions.\nOverall, the number of tasks to be solved will increase with the increase of compositions." | |
| }, | |
| { | |
| "section_id": "6.2", | |
| "parent_section_id": "6", | |
| "section_name": "6.2. Frequency Analysis", | |
| "text": "###figure_9### We then report on the frequency of different property values and findings in VAID.\nActions: \u201clow-level\u201d actions are the most used analytic actions.\nAs shown in Fig. 9 ###reference_###A, present is the most frequent action.\nThe results show that many designs are merely used for data exhibition.\nTherefore, we excluded present in the later analysis since designers commonly use terms like \u201cshow,\u201d \u201cvisualize,\u201d and other ambiguous words to explain their use of such view designs.\nAfter excluding this category, compare, identify, and summarize are the most popular.\nThese three actions are categorized into \u201clow-level\u201d query actions by Brehmer and Munzner (Brehmer and Munzner, 2013 ###reference_b8###).\nFor the goal of searching, explore is the most popular one, which stands for exploratory analysis.\nInterestingly, we did not discover designs that are used for enjoy, showing the difference between visual analytics and infographics.\nTargets: domain-specific values are the most popular targets.\nFrom target distribution (Fig. 9 ###reference_###B), we discovered that the most popular target is value, which refers to visualizing values that are computed from metrics or algorithms.\nThis reflects the features of VA, which closely collaborates with domains and utilizes data mining techniques for data preprocessing.\nDistribution and correlation are the second and third popular targets.\nThe results demonstrate that understanding the correlations and distributions of the attributes are the key indicators for data patterns in VA.\nFor graph data, links and the whole graphs are the most frequently visualized targets.\nCompositions: simple is preferable.\nComposition distribution is presented in Fig. 9 ###reference_###C. Facet, which organizes visualizations of the same types by rows and columns, is the most used composition type in VA.\nThis simple composition conveniently visualizes one or two more dimensions with simple visualization building blocks (e.g., scatterplot matrix).\nConcat is the second popular type, which refers to placing visualizations with different types side by side.\nNested composition is not covered by the original Vega-Lite.\nAlthough it has the smallest proportion, it accounts for more than 10%.\nMarks: basic types dominate.\nIn a composite visualization, each visual component is regarded as a specific mark type. We display the mark types that have more than 20 records (Fig. 9 ###reference_###D). The distribution demonstrates that bar, point, line, and rect are the most popular mark types, which are also basic mark types in Vega-Lite. For the types that Vega-Lite does not cover, graph and unit (Park et al., 2017 ###reference_b50###) visualizations rank and among all types. The type others ranks , indicating that glyph visualizations are also commonly used in view designs.\nChannels: most relate to the Cartesian coordinate system. We show the visual channels with more than 10 records in Fig. 9 ###reference_###E. The channels x, y, and color are the most popular. The channels link and node are also frequently used because of the graph-related visualizations, such as Sankey diagrams, graphs, and trees.\nData Types: about 90% fields are quantitative and nominal. Among all data types, quantitative data and nominal data are the most frequently encoded in the visualizations (Fig. 9 ###reference_###F), followed by node data, which is commonly used in graph data.\nAggregate: binning/counting, or more complex operations. 
Aggregate types are labeled based on Vega-Lite aggregation operations (Fig. 9 ###reference_###G). Aggregate count and bin are the most frequently used types, which are usually because of the visualization of histograms. Other aggregate types are not frequently used in view designs. The reason might be that complex data processing methods and metrics are adopted in VA, instead of basic aggregation strategies, such as sum, median, and variance.\nField Names: time and feature analysis are main characters. Field names are the terms used in the original VA research papers describing the fields. We show the word frequency of field names using word cloud (Fig. 9 ###reference_###H). From the word cloud, we immediately discover that the words time, feature, metric, and dimensionality reduction have a relatively large size, indicating the values about these terms are frequently used in VA research." | |
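The frequency and co-occurrence tallies reported above can be reproduced with a few lines of Python once each index is flattened into label lists; the record layout below is an illustrative assumption rather than the actual VAID schema.

```python
from collections import Counter
from itertools import combinations

# Hypothetical flattened records, one per indexed view design.
records = [
    {"actions": ["compare", "identify"], "targets": ["value"], "marks": ["bar", "line"]},
    {"actions": ["summarize"], "targets": ["distribution"], "marks": ["bar"]},
]

# Frequency of each action and target across the collection.
action_freq = Counter(a for r in records for a in r["actions"])
target_freq = Counter(t for r in records for t in r["targets"])

# Co-occurrence of mark types within the same view design.
mark_cooccurrence = Counter(
    pair for r in records for pair in combinations(sorted(set(r["marks"])), 2)
)

print(action_freq.most_common(3))      # e.g., [('compare', 1), ('identify', 1), ('summarize', 1)]
print(mark_cooccurrence.most_common()) # e.g., [(('bar', 'line'), 1)]
```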
| }, | |
| { | |
| "section_id": "7", | |
| "parent_section_id": null, | |
| "section_name": "7. Discussion", | |
| "text": "In this section, we discuss the potential avenues for future research on VAID and its limitations." | |
| }, | |
| { | |
| "section_id": "7.1", | |
| "parent_section_id": "7", | |
| "section_name": "7.1. Opportunties for Future Research", | |
| "text": "We identified multiple research opportunities grouped into three primary avenues.\nFirst, VAID offers the potential to enhance view design assessment.\nWhile our statistical analysis of 442 designs has yielded valuable insights (Sec. 6 ###reference_###), there remains an opportunity for deeper analysis through the integration of VAID.\nFor example, in the current process of VA design, the selection of design alternatives is mainly guided by design principles (Wu et al., 2023 ###reference_b65###).\nVAID can retrieve potentially useful designs regarding data and tasks, which complements design alternatives for a more comprehensive discussion and justifications.\nSuch an index structure accompanied by a database can help to improve the rigor of VA research.\nFuture research can focus on developing an evaluation method for VA designs based on VAID since this structured approach transforms abstract designs into a more analyzable format, enabling the application of various analysis techniques (e.g., regression, clustering).\nSecond, VAID presents the opportunity to simplify comparisons of view designs.\nAlthough VA designs have long been criticized for their over-crafted designs for specific domain problems (Wu et al., 2023 ###reference_b65###), they might share similarities in specific views, components, and tasks.\nAs highlighted by U5 and U9, the importance of comparing designs cannot be understated during the exploration process.\nVAID allows for comparisons in different dimensions, revealing both commonalities and differences in analytical tasks and visual designs. Future research can focus on enhancing the effectiveness of comparisons between different designs.\nThirdly, we envision VAID as an initial step toward enhancing the automation in VA.\nWhile there have been efforts to automate the creation of visualizations (Wu et al., 2022 ###reference_b66###), there has been limited exploration of automation within the realm of complex VA design.\nAutomating VA design requires large-scale datasets in need of training, which necessitates detailed information for VA designs.\nOne challenge in this regard is the mismatch between the intensive visual information conveyed and the limited accessible information through captions and figures. In this regard, VAID takes on a crucial role as an initial step in augmenting the accessible information.\nAdditionally, we encourage the open-sourcing of more VA systems, as they represent valuable outcomes of iterative design. The designs and system code shared through open-source projects will serve as valuable resources for the community.\nThrough these collective efforts, we can gradually simplify the production process of VA systems, ultimately achieving automation in VA design." | |
| }, | |
| { | |
| "section_id": "7.2", | |
| "parent_section_id": "7", | |
| "section_name": "7.2. Limitations", | |
| "text": "As a first attempt to construct an index structure for VA design from the perspectives of tasks and visual designs, our work has several limitations that warrant future research.\nInteractions.\nInteractions are important features of VA systems.\nHowever, it is difficult to recognize the interactions from VA designs, even in an entirely manual manner, since not all interactions within/between views are introduced in the original papers.\nMoreover, the use of static images hiders our analysis of configurable VA systems (e.g., Turkay et al. (Turkay et al., 2016 ###reference_b62###)), as the configuration frameworks are not reflected in the images.\nFacilitating better analysis of view relationships requires parsing live VA systems and constructing the data flow between views.\nGeneralizability.\nIn this study, we designed and evaluated VAID based on high-quality VA designs from top-tier conference papers.\nThese designs make up a corpus that comprises composite and multiple-view visualizations, which were recognized to be complex and hard to understand (Wu et al., 2023 ###reference_b65###).\nAs a result, VAID is capable of representing visualization designs with complex structures.\nWe believe that VAID can be used to index and represent a wider range of visualization designs, such as infographics, which are usually facilitated with novel layouts and glyphs (Ying et al., 2022 ###reference_b69###) that improve the expressiveness of information.\nHowever, it introduces additional challenges because infographics usually contain distorted graphical elements for metaphoric representation (Ying et al., 2023a ###reference_b68###) and the combination of additional modalities, such as text and images.\nIn this study, we started from the VA community and derived VAID as a kickoff for the research of indexing such complex visualizations.\nIt requires future research to evaluate and extend VAID with a more general dataset.\nEvaluation.\nIn our two in-lab studies, participants are required to complete the VAST mini-challenge in a short time.\nWe hope to synthesize the scenario of creating VA designs, but real-world VA design often involves collaboration with domain experts.\nWhile we tried to avoid tasks requiring specialized knowledge, fully simulating authentic collaborative scenarios remains a challenge.\nIn the future, we hope to carry on a field study with VAID, asking VA experts to use VAID in their routine design process, observing their behaviors, and gathering more comprehensive feedback from their experience.\nScalability. In this work, the scalability of annotation is limited because it requires extensive visualization knowledge for annotating such a fine-grained structure.\nOur work is rooted in the fact that there lack of practical rules and guidance in decomposing view designs in VA.\nAs a starting point, we manually annotate and fine-tune the structure iteratively with a workshop study, aiming to construct a solid foundation for the indexing.\nSuch manual efforts were expensive and resulted in a relatively small dataset size.\nIn the future, we plan to improve VAID with a combination of machine learning methods.\n\nThese methods not only ease the effort of manual labeling but also enhance the information available.\nIn terms of the former, approaches like VisImages (Deng et al., 2023c ###reference_b18###) leverage computer vision models to detect view locations in research papers. 
Efforts can also be made to extract visual structures such as maps (Poco et al., 2018 ###reference_b52###), charts (Poco and Heer, 2017 ###reference_b51###; Savva et al., 2011 ###reference_b56###; Ying et al., 2023b ###reference_b70###), and PowerPoint slides (Shi et al., 2022 ###reference_b57###).\nIt is possible to adopt deep learning models to detect the positions of visual elements and reconstruct their relations.\nRegarding the latter, additional information can be valuable. For instance, utilizing computer vision models to derive color palettes aids in analyzing the emotional tone of designs (Lan et al., 2023 ###reference_b34###) and inspires future designers (Shi et al., 2023 ###reference_b58###). Annotations and other text information extracted using OCR techniques (Memon et al., 2020 ###reference_b46###) from charts can serve as supplementary material, aiding users in understanding essential information such as the data narrative and context (Ren et al., 2017 ###reference_b53###)." | |
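As one example of the additional information discussed above, a simple color-palette extraction could cluster the pixels of a chart image and report the dominant colors. This k-means sketch is an assumption for illustration; it is not the method used by the cited works, and the input file name is a placeholder.

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def dominant_palette(image_path: str, n_colors: int = 5) -> np.ndarray:
    """Return n_colors RGB cluster centers approximating the image's palette."""
    img = Image.open(image_path).convert("RGB").resize((100, 100))  # downsample for speed
    pixels = np.asarray(img, dtype=float).reshape(-1, 3)
    centers = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels).cluster_centers_
    return centers.round().astype(int)

# print(dominant_palette("va_view_screenshot.png"))  # hypothetical input image
```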
| }, | |
| { | |
| "section_id": "8", | |
| "parent_section_id": null, | |
| "section_name": "8. conclusion", | |
| "text": "We built an index structure, VAID, from visual analytics research papers.\nThe structure features an index for describing complex VA designs from the perspectives of analytical tasks and visual designs.\nVAID is constructed iteratively through a workshop study with 12 VA designers.\nThe structure provides opportunities to understand and utilize state-of-the-art visualization designs, which are demonstrated through a user study.\nHowever, given that designing a visual analytics system is a complex procedure,\nwe note that our work is the first step toward understanding and indexing VA systems.\nWe hope that our VAID and lessons learned could provide a helpful foundation for further research." | |
| } | |
| ], | |
| "appendix": [], | |
| "tables": {}, | |
| "image_paths": { | |
| "1": { | |
| "figure_path": "2211.02567v2_figure_1.png", | |
| "caption": "Figure 1. \nThe frequency of users\u2019 preference rankings for data, tasks, and visualization. For instance, 6 participants ranked \u201cData\u201d as their top choice.", | |
| "url": "http://arxiv.org/html/2211.02567v2/x1.png" | |
| }, | |
| "2": { | |
| "figure_path": "2211.02567v2_figure_2.png", | |
| "caption": "Figure 2. \nA VA task consists of a dual-key index.\nThe action\u2019s value is selected from four classes, with each class having a single subclass chosen, so as the target\u2019s value. The task \u201cenjoy + values\u201d is exemplified in red strokes.", | |
| "url": "http://arxiv.org/html/2211.02567v2/x2.png" | |
| }, | |
| "3": { | |
| "figure_path": "2211.02567v2_figure_3.png", | |
| "caption": "Figure 3. We follow a Vega-Lite style to describe complex view designs. (A) Our formal index structure for visual designs and (B) an example with the index.\nThe structure for representing each chart, including marks and encodings, is simplified and indicated using \u201c[] part\u201d with black text. To illustrate, we provide an example of the bar part in the upper left corner.", | |
| "url": "http://arxiv.org/html/2211.02567v2/extracted/5429227/spec_example_v1.png" | |
| }, | |
| "4": { | |
| "figure_path": "2211.02567v2_figure_4.png", | |
| "caption": "Figure 4. Example indexes of (A) graph-related visualizations and (B) visualizations with different compositions.", | |
| "url": "http://arxiv.org/html/2211.02567v2/x3.png" | |
| }, | |
| "5": { | |
| "figure_path": "2211.02567v2_figure_5.png", | |
| "caption": "Figure 5. The VAID Explorer contains a filtering view (A), an indexing view (B), a gallery view (C), and a detail view (D).", | |
| "url": "http://arxiv.org/html/2211.02567v2/extracted/5429227/prototype_v2.png" | |
| }, | |
| "6": { | |
| "figure_path": "2211.02567v2_figure_6.png", | |
| "caption": "Figure 6. A usage scenario using VAID to design new visualizations for visual analytics.\nThe prototype supports searching view designs by detailed indexes (A) and key values (B).", | |
| "url": "http://arxiv.org/html/2211.02567v2/extracted/5429227/scenario_v2.png" | |
| }, | |
| "7": { | |
| "figure_path": "2211.02567v2_figure_7.png", | |
| "caption": "Figure 7. User ratings from the perspectives of understandability, interpretability, satisfaction, usefulness, and diversity. The number on the right illustrates the average score and the 95% confidence interval.", | |
| "url": "http://arxiv.org/html/2211.02567v2/x4.png" | |
| }, | |
| "8": { | |
| "figure_path": "2211.02567v2_figure_8.png", | |
| "caption": "Figure 8. Overview of the view designs in terms of compositions.", | |
| "url": "http://arxiv.org/html/2211.02567v2/x5.png" | |
| }, | |
| "9": { | |
| "figure_path": "2211.02567v2_figure_9.png", | |
| "caption": "Figure 9. The property distribution of view designs: actions (A), targets (B), composition types (C), mark types (D), visual channels (E), data types (F), aggregate types (G), and field names (H).", | |
| "url": "http://arxiv.org/html/2211.02567v2/x6.png" | |
| } | |
| }, | |
| "validation": true, | |
| "references": [ | |
| { | |
| "1": { | |
| "title": "Low-Level Components of Analytic Activity in Information Visualization. In Proceedings of IEEE Symposium on Information Visualization. 111\u2013117.", | |
| "author": "R. Amar, J. Eagan, and J. Stasko. 2005.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "2": { | |
| "title": "Beagle: Automated Extraction and Interpretation of Visualizations from the Web. In Proceedings of CHI Conference on Human Factors in Computing Systems. 1\u20138.", | |
| "author": "Leilani Battle, Peitong Duan, Zachery Miranda, Dana Mukusheva, Remco Chang, and Michael Stonebraker. 2018.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "3": { | |
| "title": "Reflections on How Designers Design with Data. In Proceedings of the International Working Conference on Advanced Visual Interfaces. 17\u201324.", | |
| "author": "Alex Bigelow, Steven Drucker, Danyel Fisher, and Miriah Meyer. 2014.", | |
| "venue": "https://doi.org/10.1145/2598153.2598175", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "4": { | |
| "title": "Glyph-Based Visualization: Foundations, Design Guidelines, Techniques and Applications.. In Proceedings of Eurographics Conference on Visualization (State of the Art Reports). 39\u201363.", | |
| "author": "Rita Borgo, Johannes Kehrer, David HS Chung, Eamonn Maguire, Robert S Laramee, Helwig Hauser, Matthew Ward, and Min Chen. 2013.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "5": { | |
| "title": "What Makes a Visualization Memorable?", | |
| "author": "Michelle A. Borkin, Azalea A. Vo, Zoya Bylinskii, Phillip Isola, Shashank Sunkavalli, Aude Oliva, and Hanspeter Pfister. 2013.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 19, 12 (2013), 2306\u20132315.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "6": { | |
| "title": "D Data-Driven Documents.", | |
| "author": "Michael Bostock, Vadim Ogievetsky, and Jeffrey Heer. 2011.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 2301\u20132309.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "7": { | |
| "title": "A Multi-Level Typology of Abstract Visualization Tasks.", | |
| "author": "Matthew Brehmer and Tamara Munzner. 2013.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 19, 12 (2013), 2376\u20132385.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "8": { | |
| "title": "MIG-Viewer: Visual Analytics of Soccer Player Migration.", | |
| "author": "Anqi Cao, Xiao Xie, Ji Lan, Huihua Lu, Xinli Hou, Jiachen Wang, Hui Zhang, Dongyu Liu, and Yingcai Wu. 2021.", | |
| "venue": "Visual Informatics 5, 3 (2021).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "9": { | |
| "title": "Voila: Visual Anomaly Detection and Monitoring with Streaming Spatiotemporal Data.", | |
| "author": "Nan Cao, Chaoguang Lin, Qiuhan Zhu, Yu-Ru Lin, Xian Teng, and Xidao Wen. 2018.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 24, 1 (2018), 23\u201333.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "10": { | |
| "title": "Readings in Information Visualization: Using Vision to Think.", | |
| "author": "Stuart K. Card, Jock D. Mackinlay, and Ben Shneiderman. 1999.", | |
| "venue": "Academic Press.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "11": { | |
| "title": "VAST Challenge 2022.", | |
| "author": "VAST Challenge Committee Chairs. 2022.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "12": { | |
| "title": "VIS30K: A Collection of Figures and Tables from IEEE Visualization Conference Publications.", | |
| "author": "Jian Chen, Meng Ling, Rui Li, Petra Isenberg, Tobias Isenberg, Michael Sedlmair, Torsten M\u00f6ller, Robert S. Laramee, Han-Wei Shen, Katharina W\u00fcnsche, and Qiru Wang. 2021a.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 27, 9 (2021), 3826\u20133833.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "13": { | |
| "title": "Composition and Configuration Patterns in Multiple-View Visualizations.", | |
| "author": "Xi Chen, Wei Zeng, Yanna Lin, Hayder Mahdi AI-maneea, Jonathan Roberts, and Remco Chang. 2021b.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 27, 2 (2021), 1514\u20131524.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "14": { | |
| "title": "A Taxonomy of Visualization Techniques Using the Data State Reference Model. In IEEE Symposium on Information Visualization. 69\u201375.", | |
| "author": "Ed Huai-hsin Chi. 2000.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "15": { | |
| "title": "Revisiting the Design Patterns of Composite Visualizations.", | |
| "author": "Dazhen Deng, Weiwei Cui, Xiyu Meng, Mengye Xu, Yu Liao, Haidong Zhang, and Yingcai Wu. 2023a.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 29, 12 (2023), 5406\u20135421.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "16": { | |
| "title": "DashBot: Insight-Driven Dashboard Generation Based on Deep Reinforcement Learning.", | |
| "author": "Dazhen Deng, Aoyu Wu, Huamin Qu, and Yingcai Wu. 2022.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 29, 1 (2022), 690\u2013700.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "17": { | |
| "title": "VisImages: A Fine-Grained Expert-Annotated Visualization Dataset.", | |
| "author": "Dazhen Deng, Yihong Wu, Xinhuan Shu, Jiang Wu, Siwei Fu, Weiwei Cui, and Yingcai Wu. 2023c.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 29, 7 (2023), 3298\u20133311.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "18": { | |
| "title": "A Survey of Urban Visual Analytics: Advances and Future Directions.", | |
| "author": "Zikun Deng, Di Weng, Shuhan Liu, Yuan Tian, Mingliang Xu, and Yingcai Wu. 2023b.", | |
| "venue": "Computational Visual Media 9, 1 (2023), 3\u201339.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "19": { | |
| "title": "Hierarchical Aggregation for Information Visualization: Overview, Techniques, and Design Guidelines.", | |
| "author": "Niklas Elmqvist and Jean-Daniel Fekete. 2010.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 16, 3 (2010), 439\u2013454.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "20": { | |
| "title": "A Framework for Analyzing and Designing Diagrams and Graphics. In Proceedings of International Conference on Theory and Application of Diagrams. 201\u2013209.", | |
| "author": "Yuri Engelhardt and Clive Richards. 2018.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "21": { | |
| "title": "Visual Analysis of MOOC Forums with iForum.", | |
| "author": "Siwei Fu, Jian Zhao, Weiwei Cui, and Huamin Qu. 2017.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 23, 1 (2017), 201\u2013210.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "22": { | |
| "title": "VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detectio.", | |
| "author": "Liang Gou, Lincan Zou, Nanxiang Li, Michael Hofmann, Arvind Kumar Shekar, Axel Wendt, and Liu Ren. 2021.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 27, 2 (2021), 261\u2013271.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "23": { | |
| "title": "Information Graphics: A Comprehensive Illustrated Reference.", | |
| "author": "Robert L Harris. 1999.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "24": { | |
| "title": "Declarative Language Design for Interactive Visualization.", | |
| "author": "Jeffrey Heer and Michael Bostock. 2010.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 16, 6 (2010), 1149\u20131156.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "25": { | |
| "title": "Getting Inspired!: Understanding How and Why Examples Are Used in Creative Design Practice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 87\u201396.", | |
| "author": "Scarlett R. Herring, Chia-Chen Chang, Jesse Krantzler, and Brian P. Bailey. 2009.", | |
| "venue": "https://doi.org/10.1145/1518701.1518717", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "26": { | |
| "title": "Searching the Visual Style and Structure of D3 Visualizations.", | |
| "author": "Enamul Hoque and Maneesh Agrawala. 2020.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 26, 1 (2020), 1236\u20131245.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "27": { | |
| "title": "Viznet: Towards a Large-Scale Visualization Learning and Benchmarking Repository. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1\u201312.", | |
| "author": "Kevin Hu, Snehalkumar\u2019Neil\u2019S Gaikwad, Madelon Hulsebos, Michiel A Bakker, Emanuel Zgraggen, C\u00e9sar Hidalgo, Tim Kraska, Guoliang Li, Arvind Satyanarayan, and \u00c7a\u011fatay Demiralp. 2019.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "28": { | |
| "title": "Exploring the Design Space of Composite Visualization. In Proceedings of IEEE Pacific Visualization Symposium. 1\u20138.", | |
| "author": "Waqas Javed and Niklas Elmqvist. 2012.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "29": { | |
| "title": "Visual Analytics: Definition, Process, and Challenges.", | |
| "author": "Daniel Keim, Gennady Andrienko, Jean-Daniel Fekete, Carsten G\u00f6rg, J\u00f6rn Kohlhammer, and Guy Melan\u00e7on. 2008a.", | |
| "venue": "In Information Visualization. 154\u2013175.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "30": { | |
| "title": "Visual Analytics: Scope and Challenges.", | |
| "author": "Daniel A. Keim, Florian Mansmann, J\u00f6rn Schneidewind, Jim Thomas, and Hartmut Ziegler. 2008b.", | |
| "venue": "In Proceedings of Visual Data Mining: Theory, Techniques and Tools for Visual Analytics. Lecture Notes in Computer Science, Vol. 4404. 76\u201390.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "31": { | |
| "title": "Facetto: Combining Unsupervised and Supervised Learning for Hierarchical Phenotype Analysis in Multi-Channel Image Data.", | |
| "author": "Robert Krueger, Johanna Beyer, Won-Dong Jang, Nam Wook Kim, Artem Sokolov, Peter K. Sorger, and Hanspeter Pfister. 2020.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 26, 1 (2020), 227\u2013237.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "32": { | |
| "title": "A Survey of Visual Analytics Techniques for Online Education.", | |
| "author": "Xiaoyan Kui, Naiming Liu, Qiang Liu, Jingwei Liu, Xiaoqian Zeng, and Chao Zhang. 2022.", | |
| "venue": "Visual Informatics 6, 4 (2022).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "33": { | |
| "title": "Affective Visualization Design: Leveraging the Emotional Impact of Data.", | |
| "author": "Xingyu Lan, Yanqiu Wu, and Nan Cao. 2023.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics (2023), 1\u201311.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "34": { | |
| "title": "Task Taxonomy for Graph Visualization. In Proceedings of AVI Workshop on BEyond Time and Errors: Novel Evaluation Methods for Information Visualization. 1\u20135.", | |
| "author": "Bongshin Lee, Catherine Plaisant, Cynthia Sims Parr, Jean-Daniel Fekete, and Nathalie Henry. 2006.", | |
| "venue": "https://doi.org/10.1145/1168149.1168168", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "35": { | |
| "title": "Designing with Interactive Example Galleries. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2257\u20132266.", | |
| "author": "Brian Lee, Savil Srivastava, Ranjitha Kumar, Ronen Brafman, and Scott R. Klemmer. 2010.", | |
| "venue": "https://doi.org/10.1145/1753326.1753667", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "36": { | |
| "title": "HiPiler: Visual Exploration of Large Genome Interaction Matrices with Interactive Small Multiples.", | |
| "author": "Fritz Lekschas, Benjamin Bach, Peter Kerpedjiev, Nils Gehlenborg, and Hanspeter Pfister. 2018.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 24, 1 (2018), 522\u2013531.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "37": { | |
| "title": "ECharts: A Declarative Framework for Rapid Construction of Web-Based Visualization.", | |
| "author": "Deqing Li, Honghui Mei, Yi Shen, Shuang Su, Wenli Zhang, Junting Wang, Ming Zu, and Wei Chen. 2018.", | |
| "venue": "Visual Informatics 2, 2 (2018), 136\u2013146.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "38": { | |
| "title": "Structure-Aware Visualization Retrieval. In Proceedings of CHI Conference on Human Factors in Computing Systems.", | |
| "author": "Haotian Li, Yong Wang, Aoyu Wu, Huan Wei, and Huamin Qu. 2022.", | |
| "venue": "https://doi.org/10.1145/3491102.3502048", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "39": { | |
| "title": "TPFlow: Progressive Partition and Multidimensional Pattern Extraction for Large-Scale Spatio-Temporal Data Analysis.", | |
| "author": "Dongyu Liu, Panpan Xu, and Liu Ren. 2019.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 25, 1 (2019), 1\u201311.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "40": { | |
| "title": "Classification with Invariant Scattering Representations.", | |
| "author": "Gerald L Lohse, Kevin Biolsi, Neff Walker, and Henry H Rueter. 1994.", | |
| "venue": "Commun. ACM 37, 12 (1994), 36\u201350.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "41": { | |
| "title": "WaterExcVA: A System for Exploring and Visualizing Data Exception in Urban Water Supply.", | |
| "author": "Qiang Lu, Yifan Ge, Jingang Rao, Liang Ling, Ye Yu, and Zhenya Zhang. 2023.", | |
| "venue": "Journal of Visualization 26, 4 (2023).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "42": { | |
| "title": "Comparative Layouts Revisited: Design Space, Guidelines, and Future Directions.", | |
| "author": "Sehi LYi, Jaemin Jo, and Jinwook Seo. 2021.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 27, 2 (2021), 1525\u20131535.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "43": { | |
| "title": "Automating the Design of Graphical Presentations of Relational Information.", | |
| "author": "Jock Mackinlay. 1986.", | |
| "venue": "ACM Transactions on Graphics 5, 2 (1986), 110\u2013141.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "44": { | |
| "title": "Design for Information: An Introduction to the Histories, Theories, and Best Practices behind Effective Visualizations.", | |
| "author": "Isabel Meirelles. 2013.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "45": { | |
| "title": "Handwritten Optical Character Recognition (OCR): A Comprehensive Systematic Literature Review (SLR).", | |
| "author": "Jamshed Memon, Maira Sami, Rizwan Ahmed Khan, and Mueen Uddin. 2020.", | |
| "venue": "IEEE access : practical innovations, open solutions 8 (2020), 142642\u2013142668.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "46": { | |
| "title": "A Nested Model for Visualization Design and Validation.", | |
| "author": "Tamara Munzner. 2009.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 15, 6 (2009), 921\u2013928.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "47": { | |
| "title": "Visualization Analysis and Design.", | |
| "author": "Tamara Munzner. 2014.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "48": { | |
| "title": "PipelineProfiler: A Visual Analytics Tool for the Exploration of AutoML Pipelines.", | |
| "author": "Jorge Piazentin Ono, Sonia Castelo, Roque Lopez, Enrico Bertini, Juliana Freire, and Claudio Silva. 2021.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 27, 2 (2021), 390\u2013400.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "49": { | |
| "title": "Atom: A Grammar for Unit Visualizations.", | |
| "author": "Deokgun Park, Steven M Drucker, Roland Fernandez, and Niklas Elmqvist. 2017.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 24, 12 (2017), 3032\u20133043.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "50": { | |
| "title": "Reverse-Engineering Visualizations: Recovering Visual Encodings from Chart Images.", | |
| "author": "Jorge Poco and Jeffrey Heer. 2017.", | |
| "venue": "Computer Graphics Forum 36, 3 (2017), 353\u2013363.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "51": { | |
| "title": "Extracting and Retargeting Color Mappings from Bitmap Images of Visualizations.", | |
| "author": "Jorge Poco, Angela Mayhua, and Jeffrey Heer. 2018.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 24, 1 (2018), 637\u2013646.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "52": { | |
| "title": "ChartAccent: Annotation for Data-Driven Storytelling. In IEEE Pacific Visualization Symposium. 230\u2013239.", | |
| "author": "Donghao Ren, Matthew Brehmer, Bongshin Lee, Tobias Hollerer, and Eun Kyoung Choe. 2017.", | |
| "venue": "https://doi.org/10.1109/PACIFICVIS.2017.8031599", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "53": { | |
| "title": "Knowledge Generation Model for Visual Analytics.", | |
| "author": "Dominik Sacha, Andreas Stoffel, Florian Stoffel, Bum Chul Kwon, Geoffrey Ellis, and Daniel A. Keim. 2014.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 20, 12 (2014), 1604\u20131613.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "54": { | |
| "title": "Vega-Lite: A Grammar of Interactive Graphics.", | |
| "author": "Arvind Satyanarayan, Dominik Moritz, Kanit Wongsuphasawat, and Jeffrey Heer. 2017.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 23, 1 (2017), 341\u2013350.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "55": { | |
| "title": "ReVision: Automated Classification, Analysis and Redesign of Chart Images. In In Proceedings of Annual ACM Symposium on User Interface Software and Technology. 393\u2013402.", | |
| "author": "Manolis Savva, Nicholas Kong, Arti Chhajta, Li Fei-Fei, Maneesh Agrawala, and Jeffrey Heer. 2011.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "56": { | |
| "title": "Reverse-Engineering Information Presentations: Recovering Hierarchical Grouping from Layouts of Visual Elements.", | |
| "author": "Danqing Shi, Weiwei Cui, Danqing Huang, Haidong Zhang, and Nan Cao. 2022.", | |
| "venue": "CoRR abs/2201.05194 (2022).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "57": { | |
| "title": "De-Stijl: Facilitating Graphics Design with Interactive 2D Color Palette Recommendation. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1\u201319.", | |
| "author": "Xinyu Shi, Ziqi Zhou, Jing Wen Zhang, Ali Neshati, Anjul Kumar Tyagi, Ryan Rossi, Shunan Guo, Fan Du, and Jian Zhao. 2023.", | |
| "venue": "https://doi.org/10.1145/3544548.3581070", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "58": { | |
| "title": "Creativity Support Tools.", | |
| "author": "Ben Shneiderman. 2002.", | |
| "venue": "Commun. ACM 45, 10 (2002), 116\u2013120.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "59": { | |
| "title": "Augmenting Visualizations with Interactive Data Facts to Facilitate Interpretation and Communication.", | |
| "author": "Arjun Srinivasan, Steven M Drucker, Alex Endert, and John Stasko. 2018.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 25, 1 (2018), 672\u2013681.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "60": { | |
| "title": "Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis.", | |
| "author": "Manuel Stein, Halldor Janetzko, Andreas Lamprecht, Thorsten Breitkreutz, Philipp Zimmermann, Bastian Goldl\u00fccke, Tobias Schreck, Gennady Andrienko, Michael Grossniklaus, and Daniel A. Keim. 2018.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 24, 1 (2018), 13\u201322.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "61": { | |
| "title": "Designing Progressive and Interactive Analytics Processes for High-Dimensional Data Analysis.", | |
| "author": "Cagatay Turkay, Erdem Kaya, Selim Balcisoy, and Helwig Hauser. 2016.", | |
| "venue": "IEEE transactions on visualization and computer graphics 23, 1 (2016), 131\u2013140.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "62": { | |
| "title": "ManyEyes: A Site for Visualization at Internet Scale.", | |
| "author": "Fernanda B. Viegas, Martin Wattenberg, Frank van Ham, Jesse Kriss, and Matt McKeon. 2007.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 13, 6 (2007), 1121\u20131128.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "63": { | |
| "title": "The Grammar of Graphics: The Ggplot2 Package.", | |
| "author": "Leland Wilkinson. 2012.", | |
| "venue": "In Handbook of Computational Statistics. 375\u2013414.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "64": { | |
| "title": "In Defence of Visual Analytics Systems: Replies to Critics.", | |
| "author": "Aoyu Wu, Dazhen Deng, Furui Cheng, Yingcai Wu, Shixia Liu, and Huamin Qu. 2023.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 29, 1 (2023), 1026\u20131036.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "65": { | |
| "title": "AI4VIS: Survey on Artificial Intelligence Approaches for Data Visualization.", | |
| "author": "Aoyu Wu, Yun Wang, Xinhuan Shu, Dominik Moritz, Weiwei Cui, Haidong Zhang, Dongmei Zhang, and Huamin Qu. 2022.", | |
| "venue": ", 5049\u20135070 pages.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "66": { | |
| "title": "VISAtlas: An Image-Based Exploration and Query System for Large Visualization Collections via Neural Image Embedding.", | |
| "author": "Yilin Ye, Rong Huang, and Wei Zeng. 2022.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics (2022), 1\u201315.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "67": { | |
| "title": "MetaGlyph: Automatic Generation of Metaphoric Glyph-based Visualization.", | |
| "author": "Lu Ying, Xinhuan Shu, Dazhen Deng, Yuchen Yang, Tan Tang, Lingyun Yu, and Yingcai Wu. 2023a.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 29, 1 (2023), 331\u2013341.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "68": { | |
| "title": "GlyphCreator: Towards Example-based Automatic Generation of Circular Glyphs.", | |
| "author": "Lu Ying, Tan Tangl, Yuzhe Luo, Lvkeshen Shen, Xiao Xie, Lingyun Yu, and Yingcai Wu. 2022.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 28, 1 (2022).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "69": { | |
| "title": "Reviving Static Charts into Live Charts.", | |
| "author": "Lu Ying, Yun Wang, Haotian Li, Shuguang Dou, Haidong Zhang, Xinyang Jiang, Huamin Qu, and Yingcai Wu. 2023b.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "70": { | |
| "title": "Annotation Graphs: A Graph-Based Visualization for Meta-Analysis of Data Based on User-Authored Annotations.", | |
| "author": "Jian Zhao, Michael Glueck, Simon Breslav, Fanny Chevalier, and Azam Khan. 2017.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 23, 1 (2017), 261\u2013270.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "71": { | |
| "title": "An Intelligent Approach to Automatically Discovering Visual Insights.", | |
| "author": "Yuhua Zhou, Xiyu Meng, Yanhong Wu, Tan Tang, Yongheng Wang, and Yingcai Wu. 2023.", | |
| "venue": "Journal of Visualization 26, 3 (2023).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "72": { | |
| "title": "Visual Abstraction of Large Scale Geospatial Origin-Destination Movement Data.", | |
| "author": "Zhiguang Zhou, Linhao Meng, Cheng Tang, Ying Zhao, Zhiyong Guo, Miaoxin Hu, and Wei Chen. 2019.", | |
| "venue": "IEEE Transactions on Visualization and Computer Graphics 25, 1 (2019), 43\u201353.", | |
| "url": null | |
| } | |
| } | |
| ], | |
| "url": "http://arxiv.org/html/2211.02567v2" | |
| } |