Simple is Better and Large is Not Enough: Towards Ensembling of Foundational Language Models (arXiv:2308.12272, published Aug 23, 2023)
Leveraging Knowledge and Reinforcement Learning for Enhanced Reliability of Language Models (arXiv:2308.13467, published Aug 25, 2023)
Can LLMs Obfuscate Code? A Systematic Analysis of Large Language Models into Assembly Code Obfuscation (arXiv:2412.16135, published Dec 20, 2024)
Human-Readable Adversarial Prompts: An Investigation into LLM Vulnerabilities Using Situational Context (arXiv:2412.16359, published Dec 20, 2024)