
Daily Papers

by AK and the research community

Nov 27

Deep view of the intracluster light in the Coma cluster of galaxies

Detecting and studying the intracluster light in rich clusters of galaxies has long been a challenging problem of great interest. Using the lowest surface brightness images of the Coma cluster of galaxies in the g and r bands, from the Halos and Environment of Nearby Galaxies (HERON) Coma Cluster Project, we obtained the most extensive image of intracluster light (ICL) in a single cluster to date, extending over 1.5 Mpc from the cluster core. The unprecedented wealth of spectroscopic data made publicly available by the Dark Energy Spectroscopic Instrument (DESI) Early Data Release, complemented with a compilation from the NASA/IPAC Extragalactic Database and the literature, enabled the identification of 2,157 galaxy members within Coma, from which 42 distinct groups were extracted. The synergy between these high-quality data allowed us to: 1) calculate ICL fractions of 19.9 ± 0.5% and 19.6 ± 0.6% in the g and r bands, respectively, consistent with a dynamically active cluster, 2) unveil Coma's faintest tidal features, and 3) provide a comprehensive picture of the dynamics and interactions within this complex system. Our findings indicate that the ICL connects several of these groups in a filamentous network, from which we infer the ongoing dynamical processes. In particular, we identified a faint stellar bridge linking the core of Coma with the galaxy NGC 4839, providing compelling evidence that this galaxy has already traversed the central region of the cluster.
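
To make the quoted ICL fraction concrete: it is simply the diffuse intracluster flux divided by the total cluster flux in a given band. The sketch below shows one common way to estimate it from a deep, sky-subtracted image. The surface-brightness threshold, the galaxy mask, and all function names are illustrative assumptions, not the HERON pipeline itself.

    import numpy as np

    def icl_fraction(image, galaxy_mask, sb_limit, zeropoint, pix_scale):
        """Rough ICL fraction from a deep, sky-subtracted cluster image.

        image      : 2D flux array (counts)
        galaxy_mask: bool array, True where member-galaxy light dominates
        sb_limit   : surface-brightness cut (mag/arcsec^2) below which
                     diffuse light is attributed to the ICL (an assumed
                     simplification of the actual HERON analysis)
        zeropoint  : photometric zeropoint (mag)
        pix_scale  : pixel scale (arcsec/pixel)
        """
        # Convert the surface-brightness cut to a per-pixel flux limit:
        # SB = zeropoint - 2.5 * log10(flux / pix_scale^2)
        flux_limit = pix_scale**2 * 10 ** (-0.4 * (sb_limit - zeropoint))

        positive = image > 0
        total_flux = image[positive].sum()

        # ICL pixels: diffuse light outside galaxies, fainter than the cut.
        icl = (~galaxy_mask) & positive & (image < flux_limit)
        return image[icl].sum() / total_flux

Applying such an estimator independently to the g- and r-band mosaics would yield per-band fractions directly comparable to the ~20% values quoted above.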

  • 9 authors · Dec 19, 2024

True Multimodal In-Context Learning Needs Attention to the Visual Context

Multimodal Large Language Models (MLLMs), built on powerful language backbones, have enabled Multimodal In-Context Learning (MICL): adapting to new tasks from a few multimodal demonstrations consisting of images, questions, and answers. Despite showing noticeable improvement on standard vision-language datasets, current MLLMs struggle to leverage the visual information in the demonstrations. Specifically, they tend to neglect visual cues and over-rely on textual patterns, leading to mere text imitation rather than genuine multimodal adaptation. This behavior leaves MICL effectively unimodal and largely restricts its practical utility. More importantly, this limitation is often concealed by improved performance on tasks that do not require understanding the visual context. As a result, how to effectively enhance MICL ability and reliably evaluate MICL performance remains underexplored. To address these issues, we first introduce Dynamic Attention Reallocation (DARA), an efficient fine-tuning strategy that encourages models to attend to the visual context by rebalancing attention across visual and textual tokens. In addition, we present TrueMICL, an MICL-dedicated dataset with both support and test sets that explicitly requires the integration of multimodal information, particularly visual content, for correct task completion. Extensive experiments demonstrate the effectiveness of our holistic solution, showcasing substantial improvements in true multimodal in-context learning capabilities. Code and datasets are available at https://chenxshuo.github.io/true-micl-colm.
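
The abstract states only that DARA rebalances attention across visual and textual tokens; it does not give the exact formulation. The sketch below shows one plausible, parameter-efficient reading of that idea: a learnable per-head bias added to the pre-softmax attention logits at image-token positions, so only a handful of scalars are fine-tuned. The class name, shapes, and mechanism are assumptions for illustration, not the authors' verified implementation (see their code link above for that).

    import torch
    import torch.nn as nn

    class VisualAttentionRebalancer(nn.Module):
        """Hypothetical sketch of DARA-style attention reallocation:
        a learnable per-head logit bias on visual-token keys."""

        def __init__(self, num_heads: int):
            super().__init__()
            # One bias per head, initialized to zero so the frozen
            # model's behavior is unchanged at the start of tuning.
            self.visual_bias = nn.Parameter(torch.zeros(num_heads))

        def forward(self, attn_logits: torch.Tensor,
                    visual_mask: torch.Tensor) -> torch.Tensor:
            # attn_logits: (batch, heads, q_len, k_len) pre-softmax scores
            # visual_mask: (batch, k_len) bool, True at image-token keys
            bias = self.visual_bias.view(1, -1, 1, 1)
            mask = visual_mask[:, None, None, :].to(attn_logits.dtype)
            # Shift scores on visual keys only; the softmax then
            # reallocates attention mass between visual and textual tokens.
            return attn_logits + bias * mask

Because only these per-head scalars (one set per layer) would be trained while the backbone stays frozen, such a scheme would be "efficient" in the sense the abstract uses, though the true DARA mechanism may differ.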

  • 8 authors · Jul 21