The full collection of our EmoNet effort. More info available at: https://huggingface.co/blog/felfri/emonet
LAION eV (non-profit)
AI & ML interests: datasets, computer vision
Releases related to Open-ψ (Open-Sci) Collective
Re-LAION-5B-research
OpenCLIP models trained on DataComp (https://huggingface.co/papers/2304.14108):
- laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K (Zero-Shot Image Classification, 114k downloads, 122 likes)
- laion/CLIP-ViT-B-16-DataComp.XL-s13B-b90K (Zero-Shot Image Classification, 46.3k downloads, 8 likes)
- laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K (Zero-Shot Image Classification, 25.7k downloads, 8 likes)
- laion/CLIP-ViT-B-16-DataComp.L-s1B-b8K (Zero-Shot Image Classification, 595 downloads, 1 like)
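The checkpoints above are tagged Zero-Shot Image Classification: an image is labeled by embedding it and a set of candidate text prompts with the same CLIP model, then picking the prompt with the highest cosine similarity. A minimal sketch of that scoring step with dummy embeddings (in practice the vectors would come from an OpenCLIP checkpoint such as laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K; the function below is illustrative, not part of any library):

```python
import numpy as np

def zero_shot_scores(image_emb: np.ndarray, text_embs: np.ndarray,
                     scale: float = 100.0) -> np.ndarray:
    """CLIP-style zero-shot scoring: cosine similarity between the
    L2-normalized image embedding and each L2-normalized text embedding,
    scaled and softmaxed into per-prompt probabilities."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = scale * (txt @ img)          # one logit per text prompt
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

# Dummy 4-dim embeddings; a ViT-L/14 model would produce 768-dim vectors.
rng = np.random.default_rng(0)
image = rng.normal(size=4)
prompts = rng.normal(size=(3, 4))  # e.g. "a photo of a {dog, cat, car}"
probs = zero_shot_scores(image, prompts)
```

The predicted label is simply `probs.argmax()`; the scale factor plays the role of CLIP's learned logit temperature.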
CLAP is to audio what CLIP is to images.
openMaMMUT models trained on DataComp-1.4B:
- laion/openMaMMUT-ViT-L-14-DataComp-1.4B-s12.8B-b180K (Zero-Shot Image Classification, 19 downloads, 4 likes)
- Paper: Scaling Laws for Robust Comparison of Open Foundation Language-Vision Models and Datasets (2506.04598, 7 upvotes)
- laion/openMaMMUT-ViT-L-14-512x512-pt_datacomp1b-ft_DFN512x512-s293M-b32k (Zero-Shot Image Classification, 24 downloads)
Re-LAION-5B research safe
OpenCLIP models trained on LAION-2B:
- laion/CLIP-ViT-bigG-14-laion2B-39B-b160k (Zero-Shot Image Classification, 88.5k downloads, 298 likes)
- laion/CLIP-ViT-g-14-laion2B-s34B-b88K (Zero-Shot Image Classification, 9.3k downloads, 27 likes)
- laion/CLIP-ViT-g-14-laion2B-s12B-b42K (1B params, 60k downloads, 44 likes)
- laion/CLIP-ViT-H-14-laion2B-s32B-b79K (Zero-Shot Image Classification, 1.0B params, 900k downloads, 424 likes)