runtime error

Exit code: 1. Reason:

(pretrained model downloads completed: chinese-roberta-wwm-ext-large, tokenizer.json, s1v3.ckpt, sv/pretrained_eres2netv2w24s4ep4.ckpt, v2Pro/s2Gv2ProPlus.pth)

<All keys matched successfully>
Loading Text2Semantic Weights from pretrained_models/s1v3.ckpt with Flash Attn Implement
Traceback (most recent call last):
  File "/home/user/app/inference_webui.py", line 236, in <module>
    change_gpt_weights("pretrained_models/s1v3.ckpt")
  File "/home/user/app/inference_webui.py", line 230, in change_gpt_weights
    CUDAGraphRunner.load_decoder(gpt_path), torch.device(device), torch.float16 if is_half else torch.float32
  File "/home/user/app/AR/models/t2s_model_flash_attn.py", line 405, in load_decoder
    decoder: T2SDecoderABC = decoder_cls(config, max_batch_size=1)
  File "/home/user/app/AR/models/t2s_model_flash_attn.py", line 125, in __init__
    assert torch.cuda.is_available()
AssertionError
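The crash comes from a bare `assert torch.cuda.is_available()` in the decoder's `__init__`, so the process dies on any CPU-only container. A minimal sketch of a device/dtype guard that degrades to CPU instead of asserting; `pick_device` is a hypothetical helper for illustration, not part of the app's code:

```python
import torch

def pick_device(prefer_half: bool = True):
    """Choose a device and dtype, falling back to CPU when CUDA is absent.

    The failing code asserts CUDA availability outright; checking it and
    falling back avoids the AssertionError on CPU-only hosts.
    """
    if torch.cuda.is_available():
        device = torch.device("cuda")
        # Half precision is generally fine on GPU.
        dtype = torch.float16 if prefer_half else torch.float32
    else:
        device = torch.device("cpu")
        # float16 is poorly supported on CPU, so force float32 there.
        dtype = torch.float32
    return device, dtype
```

On a Space without GPU hardware this would return `(torch.device("cpu"), torch.float32)`, letting model loading proceed (slowly) rather than exit with code 1.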
