This repository provides a knowledge-graph-oriented variant of Meta-Llama-3-8B-Instruct, fine-tuned with LoRA on the T-REx factual-triple dataset for triple extraction.
The model takes natural-language passages as input and outputs factual (head, relation, tail) triples in a structured JSON format. It is designed to serve as a knowledge-graph backend for RAG systems (e.g., HippoRAG-style KG retrieval).
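The exact prompt template used during fine-tuning is not documented here. As an illustration only, a minimal sketch of how a caller might query the model, assuming the standard Llama-3 chat format via `transformers` and a hypothetical instruction wording:

```python
def build_extraction_prompt(passage: str) -> list[dict]:
    """Build a chat-style message list asking the model to extract
    (head, relation, tail) triples as JSON. The instruction wording
    below is an assumption, not the documented training prompt."""
    system = (
        "Extract factual (head, relation, tail) triples from the passage. "
        'Respond only with JSON of the form {"triples": [[head, relation, tail], ...]}.'
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": passage},
    ]

# Usage with transformers (requires GPU and model access; not run here):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="kimeunsur/llama3-8b-trex-triples")
# out = pipe(build_extraction_prompt("New Bomb Turks is a garage punk band."),
#            max_new_tokens=128)
```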
## Training Details

### Data
- Source: T-REx factual triple dataset
- Preprocessing (conceptual):
  - T-REx triples `(subject, relation, object)` were converted into simple natural-language sentences.
  - The model was supervised to recover the triples in a structured JSON format:

    ```json
    { "triples": [ ["New Bomb Turks", "P136", "garage punk"] ] }
    ```

  - Relations are expressed either as short phrases (e.g. "is from") or as Wikidata-style relation IDs (e.g. "P136").
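Downstream code (e.g. a KG-construction pipeline) needs to turn the model's JSON completion back into triples. A minimal parsing sketch; the helper name and the tolerant handling of extra text around the JSON object are illustrative assumptions, not part of the released code:

```python
import json

def parse_triples(completion: str) -> list[tuple[str, str, str]]:
    """Parse a completion like
    {"triples": [["New Bomb Turks", "P136", "garage punk"]]}
    into a list of (head, relation, tail) tuples.
    Returns [] on malformed output instead of raising."""
    # Trim anything before the first '{' and after the last '}' in case
    # the model emits extra text around the JSON object.
    start, end = completion.find("{"), completion.rfind("}")
    if start == -1 or end == -1:
        return []
    try:
        data = json.loads(completion[start : end + 1])
    except json.JSONDecodeError:
        return []
    triples = []
    for item in data.get("triples", []):
        # Keep only well-formed 3-element entries.
        if isinstance(item, list) and len(item) == 3:
            triples.append(tuple(str(x) for x in item))
    return triples
```

For example, `parse_triples('{"triples": [["New Bomb Turks", "P136", "garage punk"]]}')` yields `[("New Bomb Turks", "P136", "garage punk")]`.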
## Model tree for kimeunsur/llama3-8b-trex-triples

Base model: meta-llama/Meta-Llama-3-8B