nielsr (HF Staff) committed
Commit 106b27d · verified · 1 parent: 34ea4ae

Add library name and links


This PR adds the `library_name` to the metadata, specifying that this model uses the Diffusers library. It also adds links to the project page and the Github repository for easier access to the project details and code.
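The `library_name` field lives in the README's YAML front matter (the block between the two `---` markers). As a minimal sketch of what this metadata change exposes to consumers, the snippet below extracts the front matter with plain string parsing; the embedded README text is a hypothetical, abbreviated stand-in for the file this PR edits:

```python
# Extract the YAML front matter from a model-card README and read the
# library_name field that this PR adds. Plain string parsing is used here
# to stay dependency-free; real tooling would use a YAML parser.
readme = """---
license: mit
pipeline_tag: text-to-audio
library_name: diffusers
---

## Model Description
"""

# The front matter is the text between the first two "---" markers.
front_matter = readme.split("---")[1]

# Build a dict from the simple "key: value" lines (list-valued keys skipped).
meta = dict(
    line.split(": ", 1)
    for line in front_matter.strip().splitlines()
    if ": " in line
)

print(meta["library_name"])  # -> diffusers
```

The Hub uses `library_name` to decide which library's loading snippet to show on the model page, which is why adding it alongside `pipeline_tag` matters.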

Files changed (1):

README.md (+13 −11)
@@ -1,25 +1,27 @@
 ---
-license: mit
-model_type: diffusers
-tags:
-- music
-- art
-- text-to-audio
-language:
-- en
 datasets:
 - seungheondoh/LP-MusicCaps-MSD
 - DynamicSuperb/MusicGenreClassification_FMA
 - DynamicSuperb/MARBLEMusicTagging_MagnaTagATune
 - agkphysics/AudioSet
+language:
+- en
+license: mit
 pipeline_tag: text-to-audio
+tags:
+- music
+- art
+- text-to-audio
+model_type: diffusers
+library_name: diffusers
 ---
 
 ## Model Description
 
-This model allows for easy setup and usage for generating music from text prompts.
+This model, QA-MDT, allows for easy setup and usage for generating music from text prompts. It incorporates a quality-aware training strategy to improve the fidelity of generated music.
 
 ## How to Use
 
-refer to : https://github.com/ivcylc/qa-mdt
-with huggingface space
+A Hugging Face Diffusers implementation is available at [this model](https://huggingface.co/jadechoghari/openmusic) and [this space](https://huggingface.co/spaces/jadechoghari/OpenMusic). For more detailed instructions and the official PyTorch implementation, please refer to the project's [Github repository](https://github.com/ivcylc/qa-mdt) and [project page](https://qa-mdt.github.io/).
+
+The model was presented in the paper [QA-MDT: Quality-aware Masked Diffusion Transformer for Enhanced Music Generation](https://huggingface.co/papers/2405.15863).