Screen2Words: Automatic Mobile UI Summarization with Multimodal Learning
Abstract
Screen2Words is a multi-modal approach for summarizing mobile screens into coherent language phrases, using deep models and a large-scale annotated dataset.
Mobile User Interface Summarization generates succinct language descriptions of mobile screens for conveying important contents and functionalities of the screen, which can be useful for many language-based application scenarios. We present Screen2Words, a novel screen summarization approach that automatically encapsulates essential information of a UI screen into a coherent language phrase. Summarizing mobile screens requires a holistic understanding of the multi-modal data of mobile UIs, including text, image, and structure, as well as UI semantics, motivating our multi-modal learning approach. We collected and analyzed a large-scale screen summarization dataset annotated by human workers. Our dataset contains more than 112k language summarizations across ~22k unique UI screens. We then experimented with a set of deep models with different configurations. Our evaluation of these models with both automatic accuracy metrics and human rating shows that our approach can generate high-quality summaries for mobile screens. We demonstrate potential use cases of Screen2Words and open-source our dataset and model to lay the foundations for further bridging language and user interfaces.
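To make the multi-modal setup concrete, below is a minimal conceptual sketch (not the authors' released model) of an encoder-decoder that fuses per-element screen text, image features, and UI-element type before decoding a summary phrase. All module names, dimensions, and vocabulary sizes are illustrative assumptions.

```python
# Conceptual sketch of multi-modal screen summarization in PyTorch.
# Dimensions, vocab sizes, and module names are assumptions, not the paper's model.
import torch
import torch.nn as nn


class ScreenSummarizer(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, num_ui_types=32):
        super().__init__()
        # Per-UI-element modalities: text token, pooled image feature, element type.
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.image_proj = nn.Linear(2048, d_model)   # e.g. pre-extracted CNN features
        self.type_embed = nn.Embedding(num_ui_types, d_model)
        self.fuse = nn.Linear(3 * d_model, d_model)  # fuse modalities per element

        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.word_embed = nn.Embedding(vocab_size, d_model)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, text_ids, image_feats, type_ids, summary_ids):
        # One encoder token per UI element, built from the three modalities.
        elem = torch.cat(
            [self.text_embed(text_ids),
             self.image_proj(image_feats),
             self.type_embed(type_ids)], dim=-1)
        src = self.fuse(elem)                          # (B, num_elements, d_model)
        tgt = self.word_embed(summary_ids)             # (B, summary_len, d_model)
        causal = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        dec = self.transformer(src, tgt, tgt_mask=causal)
        return self.out(dec)                           # logits over summary vocabulary


if __name__ == "__main__":
    model = ScreenSummarizer()
    B, E, L = 2, 12, 8                                 # batch, UI elements, summary length
    logits = model(
        torch.randint(0, 10000, (B, E)),               # screen text token ids
        torch.randn(B, E, 2048),                       # per-element image features
        torch.randint(0, 32, (B, E)),                  # UI element type ids
        torch.randint(0, 10000, (B, L)),               # teacher-forced summary tokens
    )
    print(logits.shape)                                # torch.Size([2, 8, 10000])
```

In this sketch each UI element contributes one fused encoder token; the released Screen2Words models may combine modalities differently (for example with separate encoders or attention over pixels), so treat this only as an illustration of the multi-modal encoder-decoder idea described in the abstract.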