runtime error
Exit code: 1.

Downloaded files from https://huggingface.co/Wonder-Griffin/ZeusMM-SFT-oasst1: tokenizer.json, added_tokens.json, special_tokens_map.json, config.json, zeus_mm.py, model.safetensors.

A new version of the following files was downloaded from https://huggingface.co/Wonder-Griffin/ZeusMM-SFT-oasst1:
- zeus_mm.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.

Missing keys: ['embed_tokens.weight']
Unexpected keys: []

/usr/local/lib/python3.10/site-packages/gradio/components/chatbot.py:228: UserWarning: The 'tuples' format for chatbot messages is deprecated and will be removed in a future version of Gradio. Please set type='messages' instead, which uses openai-style 'role' and 'content' keys.
  warnings.warn(

Traceback (most recent call last):
  File "/home/user/app/app.py", line 188, in <module>
    demo.queue(max_size=32, concurrency_count=1).launch(server_name="0.0.0.0", server_port=7860)
TypeError: Blocks.queue() got an unexpected keyword argument 'concurrency_count'