OpenJAI-v1.0

OpenJAI-v1.0 is an open-source large language model from Jasmine Technology Solution (JTS), designed for high performance in both Thai and English. Built on the powerful Qwen3-14B foundation, it was finetuned for practical applications through meticulous data curation in three key domains: instruction following, long-context understanding, and tool calling.

Comprehensive evaluation results demonstrate that OpenJAI-v1.0 improves upon its base model and outperforms other leading open-source Thai models of comparable size across a diverse suite of benchmarks. Crucially, these gains were achieved without significant degradation of the model's foundational knowledge.

For a complete overview of our dataset, methodology, and benchmarks, please refer to our paper: OpenJAI-v1.0: An Open Thai Large Language Model.

OpenJAI-v1.0 Highlights

  • Thai-Centric Excellence: Specifically finetuned to achieve state-of-the-art performance among open-source Thai models of comparable size, in both Thai and English.
  • Enhanced Practical Skills: Built on the robust Qwen3-14B, OpenJAI-v1.0 excels in:
    • Complex instruction following
    • Long-context understanding (up to 120,000 tokens)
    • Reliable tool and function calling
  • Top-Tier Performance: Outperforms its base model and other leading open-source Thai models of comparable size across a diverse set of benchmarks.
  • Knowledge Retention: Finetuning enhancements were achieved without significant degradation of the base model's core knowledge, avoiding catastrophic forgetting.
  • Fully Open-Source: OpenJAI-v1.0 is publicly released to foster research and application development within the Thai AI community.

Model Performance

| Benchmark | OpenJAI-v1.0-14b | Qwen3-14b | Typhoon2.1-gemma3-12b | OpenThaiGPT1.5-14b | GPT-4.1-nano-2025-04-14 |
|---|---|---|---|---|---|
| Instruction Following | | | | | |
| IFBench-EN | 32.4 | 29.7 | 27.4 | 30.6 | 28.3 |
| IFBench-TH | 39.4 | 38.1 | 36.5 | 35.4 | 34.9 |
| Multi-turn Capability | | | | | |
| MT-Bench-EN | 8.4 | 8.4 | 8.3 | 7.8 | 8.5 |
| MT-Bench-TH | 8.1 | 8.0 | 8.1 | 6.9 | 8.0 |
| Long-context Understanding | | | | | |
| MRCR | 18.9 | 18.3 | 16.9 | 16.9 | 16.2 |
| LongBench-v2 | 33.6 | 32.4 | 29.2 | 33.6 | 28.8 |
| Tool Calling | | | | | |
| BFCL-v3-EN | 60.5 | 59.2 | 52.2 | 52.9 | 53.1 |
| BFCL-v3-TH | 47.0 | 46.0 | 45.0 | 44.9 | 41.1 |
| General Knowledge | | | | | |
| MMLU-ProX-lite-EN | 66.0 | 66.6 | 55.1 | 64.3 | 36.3 |
| MMLU-ProX-lite-TH | 54.7 | 57.5 | 45.2 | 49.3 | 39.8 |

Quickstart

To get started, we recommend using the latest version of the Hugging Face transformers library.
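
If needed, transformers can be installed or upgraded with pip (accelerate is also useful, since device_map="auto" below relies on it):

pip install -U transformers accelerate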

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "JTS-AI/OpenJAI-v1.0-14B" 

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "แนะนำที่เที่ยวแถวสยามหน่อย"  # "Recommend some places to visit around Siam"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() 
content = tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n")

print("Content:", content)

OpenJAI-v1.0 is optimized for non-thinking mode. While the base model's thinking mode may be accessible, its performance is not guaranteed.
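
If you do enable thinking mode, the reasoning can be split from the final answer at the closing </think> token, following the base Qwen3 convention (where </think> has token id 151668; treat this id as an assumption inherited from the base model). A minimal sketch, reusing output_ids from the snippet above:

try:
    # Find the position right after the last </think> token (id 151668 in Qwen3)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    # No </think> token found, e.g. when thinking was disabled
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("Thinking:", thinking_content)
print("Content:", content)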

Tool Calling / Agentic Use

OpenJAI-v1.0 has strong tool-calling capabilities. You can use it with agent frameworks such as Qwen-Agent by adapting the model configuration.

To define the available tools, you can use an MCP configuration file, use Qwen-Agent's built-in tools, or register your own tools (see the sketch after the example below).
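
The configuration below assumes an OpenAI-compatible endpoint serving the model at http://localhost:8000/v1. One way to stand this up, assuming vLLM is installed (any OpenAI-compatible server works), is:

vllm serve JTS-AI/OpenJAI-v1.0-14B --port 8000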

from qwen_agent.agents import Assistant

# Define LLM, pointing to your OpenJAI-v1.0 endpoint
llm_cfg = {
    'model': 'JTS-AI/OpenJAI-v1.0-14B',

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',
}

# Define tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
        'time': {
            'command': 'uvx',
            'args': ['mcp-server-time', '--local-timezone=Asia/Bangkok']  # local timezone for Thailand
        },
        'fetch': {
            'command': 'uvx',
            'args': ['mcp-server-fetch']
        }
    }},
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation: bot.run yields the growing response list as it streams;
# iterate to completion and keep the final state
messages = [{'role': 'user', 'content': 'วาดกราฟแสดงราคาหุ้นของ JTS ในช่วง 1 เดือนที่ผ่านมา'}]  # "Plot JTS's stock price over the past month"
for responses in bot.run(messages=messages):
    pass
print(responses)
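
Beyond MCP servers and built-in tools, Qwen-Agent also supports registering custom Python tools. A minimal sketch, following Qwen-Agent's register_tool pattern (the tool name, parameter schema, and mock data here are hypothetical):

import json

from qwen_agent.tools.base import BaseTool, register_tool

@register_tool('get_stock_price')  # hypothetical tool name
class GetStockPrice(BaseTool):
    # The description and parameter schema are what the model sees
    # when deciding whether and how to call the tool
    description = 'Look up the latest price for a SET-listed stock (mock data).'
    parameters = [{
        'name': 'symbol',
        'type': 'string',
        'description': 'Ticker symbol, e.g. JTS',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        # The model's arguments arrive as a JSON string
        symbol = json.loads(params)['symbol']
        return json.dumps({'symbol': symbol, 'price': 42.0})  # mock response

# The registered name can then be passed in the agent's tool list, e.g.:
# bot = Assistant(llm=llm_cfg, function_list=tools + ['get_stock_price'])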

Processing Long Texts

OpenJAI-v1.0 was trained for robust performance on input contexts of up to 120,000 tokens. The model operates natively within a 32,768-token window, so processing contexts beyond this limit requires a context-extension technique such as YaRN or dynamic RoPE scaling.

Frameworks like vLLM and SGLang support passing command-line arguments to enable RoPE scaling.

For vLLM, you can use:

vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072  

For SGLang, you can use:

python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
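
Alternatively, for frameworks that read the rope_scaling field directly from config.json, an equivalent static-YaRN configuration (following the base Qwen3 model's documentation) would be:

{
    ...,
    "rope_scaling": {
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768
    }
}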

All the notable open-source frameworks implement static YaRN, meaning the scaling factor remains constant regardless of input length, which can impact performance on shorter texts. We therefore advise adding the rope_scaling configuration only when processing long contexts is required, and adjusting the factor to your workload: if the typical context length for your application is 65,536 tokens, it is better to set the factor to 2.0, as shown below.
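
For example, a vLLM launch scaled for 65,536-token contexts might look like this (the ellipsis stands for your other serving arguments, as above):

vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":2.0,"original_max_position_embeddings":32768}' --max-model-len 65536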

The default max_position_embeddings in config.json is set to 40,960, which reserves 32,768 tokens for outputs and 8,192 tokens for typical prompts; this is sufficient for most short-text scenarios. If your average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN, as it may degrade model performance.

Citation

@misc{trakuekul2025openjaiv10openthailarge,
      title={OpenJAI-v1.0: An Open Thai Large Language Model}, 
      author={Pontakorn Trakuekul and Attapol T. Rutherford and Jullajak Karnjanaekarin and Narongkorn Panitsrisit and Sumana Sumanakul},
      year={2025},
      eprint={2510.06847},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.06847}, 
}