FLUX-Makeup Collection
We propose FLUX-Makeup, a high-fidelity, identity-consistent, and robust makeup transfer framework that eliminates the need for any auxiliary face-cont…
Post
LongCat-Flash-Lite 🔥 is a non-thinking MoE model released by the Meituan LongCat team.
meituan-longcat/LongCat-Flash-Lite
✨ 68.5B total / 3B active parameters, MIT license
✨ 256k context
✨ Faster inference with N-gram embeddings
Post
You can now run Kimi K2.5 locally! 🔥
We shrank the 1T-parameter model to 240GB (a 60% reduction) via Dynamic 1-bit quantization.
Get >40 tok/s on 242GB, or 622GB of VRAM/RAM for near full precision.
GGUF: unsloth/Kimi-K2.5-GGUF
Guide: https://unsloth.ai/docs/models/kimi-k2.5
Post
Ant Group is going big on robotics 🤖
They just dropped their first VLA and depth perception foundation models on Hugging Face.
✨ LingBot-VLA:
- Trained on 20k hours of real-world robot data
- 9 robot embodiments
- Clear no-saturation scaling laws
- Apache 2.0
Model: https://huggingface.co/collections/robbyant/lingbot-vla
Paper: A Pragmatic VLA Foundation Model (2601.18692)
✨ LingBot-Depth:
- Metric-accurate 3D from noisy, incomplete depth
- Masked Depth Modeling (self-supervised)
- RGB–depth alignment; works with <5% sparse depth
- Apache 2.0
Model: https://huggingface.co/collections/robbyant/lingbot-depth
Paper: Masked Depth Modeling for Spatial Perception (2601.17895)