What a wild, beautiful read!
> google: we are not done with the transformer yet.
> they showed that an LLM can, in effect, update its pre-trained weights based solely on patterns learned from a given prompt/context (in real time, with no training/finetuning required): the context's contribution turns out to be equivalent to an implicit rank-1 update to the weights. This means real-time vanilla in-context learning/knowledge update 🤯
The transformer is so much more than we think :)
Paper: https://arxiv.org/pdf/2507.16003
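
For anyone curious how that equivalence can hold, here's a minimal numpy sketch of the identity I take to be at the heart of the paper: the shift that context induces in the attention output can be absorbed into a rank-1 update of the next weight matrix, so the frozen model seeing the context and the context-free model with updated weights give the same output. (Variable names are mine, for illustration; this is my reading of the result, not the paper's code.)

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

W = rng.normal(size=(d, d))   # frozen "pre-trained" weight matrix
a = rng.normal(size=d)        # attention output for the query alone, A(x)
a_ctx = rng.normal(size=d)    # attention output with context, A(C, x)
delta = a_ctx - a             # shift contributed by the context

# Rank-1 update that absorbs the context's effect into the weights:
# dW @ a == W @ delta, so (W + dW) @ a == W @ (a + delta) == W @ a_ctx
dW = np.outer(W @ delta, a) / (a @ a)

out_with_context = W @ a_ctx      # frozen weights, context in the prompt
out_with_update = (W + dW) @ a    # updated weights, no context

print(np.allclose(out_with_context, out_with_update))  # True
print(np.linalg.matrix_rank(dW))                       # 1
```

So "put it in the prompt" and "apply a rank-1 weight update" are two views of the same computation, which is what makes the no-finetuning framing so striking.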