
Daily Papers

by AK and the research community

Dec 1

Tracing the Physical Lineage of GRB 211211A: Population Constraints on NS-WD Merger Gamma-Ray Bursts

The peculiar long gamma-ray burst (GRB) GRB 211211A is known for its association with a kilonova feature. Whereas most long GRBs are thought to originate in the core collapse of massive stars, the presence of a kilonova suggests GRB 211211A was instead produced by the merger of a compact-object binary. Building on the interpretation put forward by Yang et al. (2022, Nature, 612, 232), who argue that GRB 211211A was powered by a massive white dwarf-neutron star (WD-NS) merger, we adopt this WD-NS scenario as our observationally supported starting point. If the burst truly originates from that channel, its rarity must mirror the formation and merger rate of WD-NS binaries, a rate still largely unexplored in conventional massive-binary population studies. In this letter, we present a qualitative analysis based on binary evolution physics in order to understand the fraction of GRB 211211A-like events among short GRBs (the NS-WD/NS-NS fraction). Since the progenitors of massive WD-NS binaries occupy an initial-mass-function-preferred regime, where the zero-age main-sequence mass range corresponding to the assumed WD mass range (1.2-1.4 M_⊙) is comparable to that of NSs, the NS-WD/NS-NS fraction emerging from our standard evolutionary path is expected to be ~14-37%, far higher than the observed fraction (~5%). This discrepancy might imply a large, still-unidentified population of GRB 211211A-like events, or an unusual origin of the NS, such as being born in a hypernova or via accretion-induced collapse. Placing these results in a broader compact-binary context, implications for black-hole systems are also discussed.
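The abstract's argument rests on initial-mass-function (IMF) weighting: the progenitor mass ranges of massive WDs and of NSs are comparable, so IMF-weighted counts of the two populations are of the same order. A minimal numerical sketch of that single step, assuming a Salpeter IMF and purely illustrative zero-age main-sequence mass ranges (the paper's 14-37% figure also folds in binary evolution physics not modeled here):

```python
def imf_number(m_lo, m_hi, alpha=2.35):
    """Relative number of stars born between m_lo and m_hi for a
    power-law IMF dN/dm ∝ m^-alpha (Salpeter: alpha = 2.35),
    up to a common normalization constant."""
    p = 1.0 - alpha
    return (m_hi**p - m_lo**p) / p

# Hypothetical ZAMS ranges (illustrative only, in solar masses):
# progenitors leaving a massive (1.2-1.4 M_sun) WD vs. NS progenitors.
n_wd = imf_number(6.0, 8.0)   # assumed massive-WD progenitor range
n_ns = imf_number(8.0, 20.0)  # assumed NS progenitor range

ratio = n_wd / n_ns
print(f"IMF-weighted WD/NS progenitor number ratio ~ {ratio:.2f}")
```

Even with these rough ranges the ratio is of order unity, which is why, before binary-evolution efficiencies are applied, a sizable NS-WD/NS-NS fraction is the natural expectation.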

  • 4 authors
·
Aug 14

SuperCoder2.0: Technical Report on Exploring the feasibility of LLMs as Autonomous Programmer

We present SuperCoder2.0, an advanced autonomous system designed to enhance software development through artificial intelligence. The system combines an AI-native development approach with intelligent agents to enable fully autonomous coding. Key focus areas include a retry mechanism with error output traceback, comprehensive code rewriting and replacement using Abstract Syntax Tree (AST) parsing to minimize linting issues, a code-embedding technique for retrieval-augmented generation, and a focus on localizing methods for problem-solving rather than identifying specific line numbers. The methodology employs a three-step hierarchical search-space reduction approach for code base navigation and bug localization: (1) utilizing Retrieval-Augmented Generation (RAG) and a Repository File Level Map to identify candidate files, (2) narrowing down to the most relevant files using a File Level Schematic Map, and (3) extracting 'relevant locations' within these files. Code editing is performed through a two-part module comprising CodeGeneration and CodeEditing, which generates multiple solutions at different temperature values and replaces entire methods or classes to maintain code integrity. A feedback loop executes repository-level test cases to validate and refine solutions. Experiments conducted on the SWE-bench Lite dataset demonstrate SuperCoder2.0's effectiveness, achieving correct file localization in 84.33% of cases within the top 5 candidates and successfully resolving 34% of test instances. This performance places SuperCoder2.0 fourth globally on the SWE-bench leaderboard. The system's ability to handle diverse repositories and problem types highlights its potential as a versatile tool for autonomous software development. Future work will focus on refining the code editing process and exploring advanced embedding models for improved natural language to code mapping.
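The three-step funnel described above (repository map → file schematic map → in-file location) can be sketched as a toy pipeline. Everything here is a hypothetical stand-in: the real system uses RAG with learned embeddings and schematic maps, not the keyword-overlap scoring used below for illustration.

```python
# Illustrative sketch of a three-step hierarchical localization funnel.
# All names, scoring, and data are hypothetical; not SuperCoder2.0 code.

def score(query, text):
    """Toy relevance score: fraction of query words appearing in the text."""
    words = set(query.lower().split())
    return sum(w in text.lower() for w in words) / len(words)

def localize(issue, repo_map, top_k=2):
    # Step 1: rank candidate files from a repository-level file map.
    candidates = sorted(repo_map,
                        key=lambda f: score(issue, repo_map[f]["summary"]),
                        reverse=True)[:top_k]
    # Step 2: narrow down using a per-file schematic map (method signatures).
    best_file = max(candidates,
                    key=lambda f: score(issue, " ".join(repo_map[f]["methods"])))
    # Step 3: extract the most relevant location (method) within that file.
    best_method = max(repo_map[best_file]["methods"],
                      key=lambda m: score(issue, m))
    return best_file, best_method

repo_map = {
    "auth.py":  {"summary": "login and session handling",
                 "methods": ["def login(user)", "def logout(user)"]},
    "parse.py": {"summary": "parsing of config files",
                 "methods": ["def parse_config(path)", "def validate(cfg)"]},
}

print(localize("bug when parsing a malformed config file", repo_map))
```

The key design idea carried over from the abstract is that each step cheaply shrinks the search space before the next, more expensive step runs, and that the final output is a method-level location rather than a line number.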

  • 5 authors
·
Sep 17, 2024

AdaFortiTran: An Adaptive Transformer Model for Robust OFDM Channel Estimation

Deep learning models for channel estimation in Orthogonal Frequency Division Multiplexing (OFDM) systems often suffer from performance degradation under fast-fading channels and low-SNR scenarios. To address these limitations, we introduce the Adaptive Fortified Transformer (AdaFortiTran), a novel model specifically designed to enhance channel estimation in challenging environments. Our approach employs convolutional layers that exploit locality bias to capture strong correlations between neighboring channel elements, combined with a transformer encoder that applies a global attention mechanism to channel patches. This approach effectively models both long-range dependencies and spectro-temporal interactions within single OFDM frames. We further augment the model's adaptability by integrating nonlinear representations of available channel statistics (SNR, delay spread, and Doppler shift) as priors. A residual connection is employed to merge global features from the transformer with local features from early convolutional processing, followed by final convolutional layers to refine the hierarchical channel representation. Despite its compact architecture, AdaFortiTran achieves up to a 6 dB reduction in mean squared error (MSE) compared to state-of-the-art models. Tested across a wide range of Doppler shifts (200-1000 Hz), SNRs (0 to 25 dB), and delay spreads (50-300 ns), it demonstrates superior robustness in high-mobility environments.
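The fusion idea at the heart of the abstract (local convolutional features plus global self-attention over channel patches, merged by a residual connection) can be sketched numerically. This is a minimal NumPy illustration, not the authors' model; shapes, weights, and the single-head attention are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches, d = 16, 8                   # channel patches x feature dim
x = rng.standard_normal((n_patches, d))

# Local branch: a small 1-D convolution along the patch axis, exploiting
# locality bias between neighboring channel elements.
kernel = np.array([0.25, 0.5, 0.25])
local = np.stack([np.convolve(x[:, j], kernel, mode="same")
                  for j in range(d)], axis=1)

# Global branch: single-head scaled dot-product self-attention,
# modeling long-range dependencies across all patches.
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv
scores = Q @ K.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)   # row-wise softmax
global_feat = attn @ V

# Residual merge of global (transformer) and local (conv) features,
# as the abstract describes.
fused = local + global_feat
print(fused.shape)
```

The residual merge keeps both branches' contributions at every patch, so neither the local correlations nor the long-range structure is discarded before the final refinement stage.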

  • 2 authors
·
May 13