Reka Flash 3.1
Reka Flash 3.1 is an update to our Reka Flash 3 model. It is particularly strong at coding and works well as a base model for finetuning on agentic tasks.
Reka Flash 3.1 was post-trained with synthetic and public datasets for supervised finetuning, followed by large-scale reinforcement learning with verifiable rewards using REINFORCE Leave-One-Out (RLOO). Thanks to significant advances in our reinforcement learning stack, it improves on Reka Flash 3 by 10 points on LiveCodeBench v5 (Full set). On coding-related tasks, Reka Flash 3.1 is competitive with models such as Qwen3-32B, o3-mini, and Gemini 2.5 Flash Thinking.
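For intuition, here is a minimal sketch of the leave-one-out baseline at the heart of RLOO. It illustrates the general algorithm, not our training code; the function name and the example rewards are hypothetical.

import torch

def rloo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    # RLOO baseline: for each of k sampled completions per prompt,
    # subtract the mean reward of the other k - 1 samples.
    # rewards has shape (batch, k); with verifiable rewards this is
    # e.g. 1.0 if the completion passes its checks, else 0.0.
    k = rewards.size(-1)
    baseline = (rewards.sum(dim=-1, keepdim=True) - rewards) / (k - 1)
    return rewards - baseline

# Four completions for one prompt, two of which pass verification.
print(rloo_advantages(torch.tensor([[1.0, 0.0, 1.0, 0.0]])))
# tensor([[ 0.6667, -0.6667,  0.6667, -0.6667]])

Each completion's log-probabilities are then weighted by its advantage in the policy-gradient update.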
To learn more about the RL approach behind these improvements, please check out this post.
Try it out at our playground.
We also release a 3.5-bit quantized version of Reka Flash 3.1 along with our quantization library to better support local deployment.
Quickstart
For ease of deployment, the model is released in a Llama-compatible format. You may use any library compatible with Llama to run the model.
Via Hugging Face
import transformers

# Load the tokenizer and model from Hugging Face.
tokenizer = transformers.AutoTokenizer.from_pretrained("RekaAI/reka-flash-3.1")
model = transformers.AutoModelForCausalLM.from_pretrained(
    "RekaAI/reka-flash-3.1", torch_dtype="auto", device_map="auto"
)

# Reka Flash 3.1 uses the "human" role for user turns.
prompt = {"role": "human", "content": "Write a poem about large language models."}

# The built-in chat template renders the conversation into the model's prompt format.
text = tokenizer.apply_chat_template([prompt], tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**model_inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Via vLLM
docker run --rm -it --network=host --gpus '"device=0"' --shm-size=10.24gb vllm/vllm-openai:latest serve RekaAI/reka-flash-3.1 --dtype auto -tp 1
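Once the container is up, the server exposes vLLM's OpenAI-compatible API (on port 8000 by default). A minimal example with the openai Python client:

from openai import OpenAI

# vLLM does not check the API key, but the client requires one to be set.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RekaAI/reka-flash-3.1",
    messages=[{"role": "user", "content": "Write a poem about large language models."}],
    max_tokens=512,
)
print(response.choices[0].message.content)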
Model Details
Prompt Format
Reka Flash 3.1 uses the cl100k_base tokenizer and adds no additional special tokens. Its prompt format is as follows:
human: this is round 1 prompt <sep> assistant: this is round 1 response <sep> ...
Generation should stop upon seeing the string <sep> or the special token <|endoftext|>.
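If you generate with Hugging Face transformers and want to enforce this stop condition explicitly, recent transformers versions accept stop strings in generate (the tokenizer must be passed along so the strings can be matched):

outputs = model.generate(
    **model_inputs,
    max_new_tokens=512,
    stop_strings=["<sep>"],  # stop at the turn separator
    tokenizer=tokenizer,     # required when stop_strings is used
)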
A system prompt can be added by prepending it to the first user round.
human: You are a friendly assistant blah ... this is round 1 user prompt <sep> assistant: this is round 1 response <sep> ...
For multi-round conversations, it is recommended to drop the reasoning traces from previous assistant rounds to save tokens for the model to think. If you are using HF or vLLM, the built-in chat_template handles prompt formatting automatically; a sketch of manual formatting follows below.
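If you format prompts yourself instead, here is a minimal sketch following the format above. The strip_reasoning helper is hypothetical and assumes reasoning traces are wrapped in <reasoning> tags; adapt it to however your application delimits them.

import re

def strip_reasoning(text: str) -> str:
    # Drop a reasoning trace from a previous assistant response.
    # Assumes <reasoning>...</reasoning> delimiters (an assumption,
    # not part of the documented prompt format).
    return re.sub(r"<reasoning>.*?</reasoning>", "", text, flags=re.DOTALL).strip()

def format_prompt(rounds: list[dict]) -> str:
    # Join rounds per the documented format and end with "assistant:"
    # so the model generates the next response.
    parts = []
    for r in rounds:
        content = r["content"]
        if r["role"] == "assistant":
            content = strip_reasoning(content)
        parts.append(f"{r['role']}: {content}")
    return " <sep> ".join(parts) + " <sep> assistant:"

print(format_prompt([
    {"role": "human", "content": "You are a friendly assistant. What is 2 + 2?"},
]))
# human: You are a friendly assistant. What is 2 + 2? <sep> assistant: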
Language Support
This model is primarily built for the English language; consider it an English-only model. However, it can understand and converse in other languages to some degree.