Update README.md
#75 opened 18 days ago by peterpeter8585
Trying to 'Use this model', getting: 'The requested model 'HuggingFaceH4/zephyr-7b-beta' is not supported by any provider you have enabled'
#74 opened about 2 months ago by OnurKerimoglu
For restart endpoint
#73 opened 4 months ago by anant5552
Model's endpoint paused
#72 opened 4 months ago by stark0p
Potential inconsistencies between model and base model licenses
#71 opened 5 months ago by yueyangchen
function calling
#69 opened 7 months ago by jeeltcraft
Zephyr maximum token limit
#68 opened 7 months ago by animesh1998
Getting NULL response using curl or PHP
#67 opened 8 months ago by ajmerphull
Adding Evaluation Results
#66 opened 9 months ago by f0rGoTTen000
Update README.md
#65 opened 12 months ago by karrrr123456
Unsafe political output for mainland China network use
#64 opened about 1 year ago by Sakura12546
My_duplictemodel
#63 opened over 1 year ago by leolaish
Model is not generating an answer or it takes a really long time
#62 opened over 1 year ago by polycaman
Update README.md
#61 opened over 1 year ago by MasonDixon2711
Still receiving 'Fetch Failed' Error
#60 opened over 1 year ago by Johnbigginsman
Report: Chat Not Working
#59 opened over 1 year ago by redformurder
Internal Server Error
#58 opened over 1 year ago by Rhea0000
Demo different from API
#57 opened over 1 year ago by businesspig1
Node.js/Next.js streaming
#56 opened over 1 year ago by dreyyy
Zephyr is off
#55 opened over 1 year ago by yinbtologie
System prompts and settings from HF's Zephyr 7b-beta?
#53 opened over 1 year ago by ParanoidPosition
Truncating Response
#52 opened almost 2 years ago by Mostafaadel174
getting CUDA out of memory
#51 opened almost 2 years ago by allpunks
Weird Responses?
#50 opened almost 2 years ago by TheAGames10
zephyr-7b-beta with VLLM
#49 opened almost 2 years ago by D3v
100k is converted into $100,00
#48 opened almost 2 years ago by wehapi
response is weird
#47 opened almost 2 years ago by wehapi
Taking way too long to generate a response
#46 opened almost 2 years ago by Idkkitsune
Answering in Spanish
#45 opened almost 2 years ago by whoami02
Update Ruined Inference
#44 opened almost 2 years ago by orick96
Error Message when the number of input tokens exceeds 2000. I am using an ml.g4dn.8xlarge instance (128 GiB).
#43 opened almost 2 years ago by YWDallas
What EC2 configuration/instance should I use?
#41 opened almost 2 years ago by rikomi7571
Zephyr hallucinations with conversational memory
#39 opened almost 2 years ago by lfoppiano
Context length?
#38 opened almost 2 years ago by austinmw
[AUTOMATED] Model Memory Requirements
#37 opened almost 2 years ago by model-sizer-bot
What's the difference between zephyr-7b-beta and zephyr-7b-alpha?
#36 opened almost 2 years ago by haha-point
[AUTOMATED] Model Memory Requirements
#35 opened almost 2 years ago by model-sizer-bot
Can zephyr-7b support YARN 128K context window?
#33 opened almost 2 years ago by tim9510019
Why isn't the `model_max_length` set to 2048?
#32 opened almost 2 years ago by alvarobartt
Did the LoRA fine-tuned model end up performing the same as full fine-tuning?
#30 opened almost 2 years ago by timlim123
How do I achieve streaming output
#29 opened almost 2 years ago by wengnews
BFloat16 is not supported on MPS
#27 opened almost 2 years ago by mhelmy
Optimize Response Length and Quality
#26 opened almost 2 years ago by stargazer09
Add widget examples
#25 opened almost 2 years ago by mishig
Understand reward metrics
#22 opened almost 2 years ago by NhatHoang2002
Question on License given use of Ultrachat
#21 opened almost 2 years ago by RonanMcGovern
Very Nice Work, But It Can't Be Prompted To Tell Stories
#19 opened almost 2 years ago by deleted
Why not use the Plackett-Luce Model version of DPO when K=4 ranked responses are present?
#18 opened almost 2 years ago by MasterGodzilla
Long Context Successor?
#17 opened almost 2 years ago by brucethemoose