MusicFM

>>> import torch
>>> from transformers import AutoModel
>>> torch.manual_seed(0)
>>> # batch of 4 random waveforms: 30 seconds at 24 kHz, scaled to [-1, 1)
>>> wav = (torch.rand(4, 24000 * 30) - 0.5) * 2
>>> model = AutoModel.from_pretrained("tky823/MusicFM", trust_remote_code=True)
>>> model.eval()
>>> # extract embeddings from an intermediate layer (layer_ix=7)
>>> with torch.no_grad():
...     emb = model.get_latent(wav, layer_ix=7)
>>> emb.size()
torch.Size([4, 750, 1024])
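
The snippet above feeds random noise. For real audio, the waveform typically needs to be mono and resampled to the 24 kHz rate implied by the `24000 * 30` samples above (an assumption inferred from the snippet, not stated elsewhere in this card). Below is a minimal sketch continuing the session above (it reuses `torch` and `model`), using torchaudio with a placeholder file path `song.wav`; it also mean-pools the frame-level features into a single clip-level embedding.

>>> import torchaudio
>>> wav, sr = torchaudio.load("song.wav")  # (channels, samples); "song.wav" is a placeholder path
>>> wav = wav.mean(dim=0, keepdim=True)    # downmix to mono, keeping a batch dimension of 1
>>> wav = torchaudio.functional.resample(wav, orig_freq=sr, new_freq=24000)  # assumed 24 kHz input rate
>>> with torch.no_grad():
...     emb = model.get_latent(wav, layer_ix=7)  # (1, frames, 1024)
>>> clip_emb = emb.mean(dim=1)  # mean-pool over time: one 1024-dim vector per clip
>>> clip_emb.size()
torch.Size([1, 1024])

Mean pooling is just one choice; tasks with temporal structure may use the frame-level features directly.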

Acknowledgement

Most of the source code in this repository is based on @minzwon's repository, https://github.com/minzwon/musicfm. We gratefully acknowledge their excellent work.
