Text-to-SQL Model

AI & ML interests: Benchmark, Code Generation, LLM

Open-source works to reproduce DeepSeek R1

- perplexity-ai/r1-1776 • Text Generation • 671B
- unsloth/r1-1776-GGUF • Text Generation • 671B
- unsloth/r1-1776-distill-llama-70b-unsloth-bnb-4bit • Text Generation • 38B
- open-r1/OpenR1-Qwen-7B • Text Generation • 8B
For fine-tuning: a GPU is needed for both quantization and inference (a minimal sketch follows).
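
The sketch below illustrates the point, assuming the transformers, bitsandbytes, and peft libraries: the base model is loaded with 4-bit NF4 quantization (bitsandbytes kernels require CUDA, hence the GPU) and small LoRA adapters are attached for training. The model ID is taken from the collection above; the hyperparameters are illustrative, not a tuned recipe.

```python
# Hypothetical QLoRA-style setup: quantize the base model to 4-bit on GPU,
# then fine-tune only small LoRA adapters on top of the frozen weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "open-r1/OpenR1-Qwen-7B"  # full-precision checkpoint from the list above

# bitsandbytes 4-bit quantization runs on CUDA, which is why a GPU is required.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place the quantized layers on the available GPU(s)
)

# Attach LoRA adapters; only these small matrices are trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, train with a standard Trainer / SFT loop on text-to-SQL pairs.
```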
			
	
Text-to-SQL model

For inference: CPU is enough for both quantization and inference (a CPU-only sketch follows the list below).

- QuantFactory/OpenCoder-8B-Instruct-GGUF • Text Generation • 8B
- QuantFactory/OpenCoder-8B-Base-GGUF • Text Generation • 8B
- bartowski/starcoder2-15b-instruct-GGUF • Text Generation • 16B
- QuantFactory/starcoder2-15b-GGUF • Text Generation • 16B
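
As a rough illustration of the CPU-only path, here is a sketch assuming llama-cpp-python and huggingface_hub: it downloads one of the GGUF files above and runs a text-to-SQL prompt without any GPU. The .gguf filename is an assumption; check the repository's file listing for the quantization you actually want (e.g. Q4_K_M).

```python
# Hypothetical CPU-only inference with a GGUF checkpoint via llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="QuantFactory/OpenCoder-8B-Instruct-GGUF",
    filename="OpenCoder-8B-Instruct.Q4_K_M.gguf",  # assumed filename; verify in the repo
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=4096,    # context window
    n_threads=8,   # CPU threads; no GPU offload is configured
)

prompt = (
    "Given the table employees(id, name, department, salary), "
    "write a SQL query that returns the highest-paid employee in each department.\n"
    "SQL:"
)
out = llm(prompt, max_tokens=256, temperature=0.0)
print(out["choices"][0]["text"])
```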