---
language: en
license: mit
tags:
- mistral
- lora
- unsloth
- career-coach
- resume-analysis
- fine-tuning
- llm
datasets:
- UpdatedResumeDataSet
- Dataset-Project-404
model-index:
- name: Resume & Career Coach LLM
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Career Coach Resume Dataset
      type: custom
    metrics:
    - name: qualitative_evaluation
      type: human
      value: Consistent and context-aware feedback on resumes
---
## Overview

A fine-tuned Mistral-7B model using LoRA + Unsloth that acts as an AI career advisor. It analyzes resumes and provides personalized feedback, job-role suggestions, and skill recommendations.
## Training Details

- Base model: `unsloth/mistral-7b-instruct-v0.3-bnb-4bit`
- Fine-tuning method: LoRA (PEFT)
- Frameworks: Unsloth, Transformers, TRL
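LoRA (PEFT) keeps the frozen base weights and trains only a low-rank update, which is what makes 4-bit fine-tuning on a 7B model practical. A minimal pure-Python sketch of the idea, with toy dimensions rather than Mistral-7B's real ones:

```python
# LoRA illustration: instead of updating the full weight matrix W (d x d),
# train two small matrices A (r x d) and B (d x r), with r << d, and apply
# W' = W + (alpha / r) * (B @ A).  Shapes here are toy values.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_merge(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, the merged inference-time weight."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: d = 2, r = 1 -> only 2 * d * r = 4 trainable values
W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight
A = [[0.5, 0.5]]              # r x d
B = [[1.0], [2.0]]            # d x r
print(lora_merge(W, A, B, alpha=2, r=1))  # -> [[2.0, 1.0], [2.0, 3.0]]
```

Once training is done, the scaled `B @ A` delta can be merged back into the base weight, so the adapter adds no inference latency after merging.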
## Dataset

- `UpdatedResumeDataSet.csv` – labeled resume data
- `Dataset Project 404.xlsx` – multiple-intelligence career mapping
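A sketch of how the labeled CSV could be turned into prompt/label pairs for supervised fine-tuning. The `Category` and `Resume` column names are assumptions about the file's header, not confirmed by this card; adjust them to the actual columns.

```python
# Load the labeled resume CSV into (prompt, label) pairs for SFT.
# Column names "Category" and "Resume" are assumed, not confirmed.
import csv

def load_resume_pairs(path):
    """Yield (prompt, label) tuples from the labeled resume CSV."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            prompt = f"Analyze this resume and suggest a role:\n{row['Resume']}"
            yield prompt, row["Category"]
```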
## Features

- Provides a resume score and feedback
- Suggests suitable job roles and upskilling paths
- Lightweight 4-bit fine-tuned model for efficient inference
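The scoring and suggestion features above imply a structured instruction prompt. A hypothetical prompt builder follows; the exact wording the model was trained on is not documented here, so treat the template as illustrative:

```python
# Hypothetical prompt template for the feedback features listed above.
# The instruction wording is illustrative, not the model's trained template.

def build_coach_prompt(resume_text: str) -> str:
    """Wrap raw resume text in a career-coach instruction."""
    return (
        "You are an AI career coach. Analyze the resume below and reply with:\n"
        "1. A resume score out of 10 with brief feedback\n"
        "2. Suitable job roles\n"
        "3. Skills to learn next (upskilling path)\n\n"
        f"Resume:\n{resume_text.strip()}"
    )
```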
## Intended Use

This model is intended for educational and research purposes related to AI-driven career guidance systems.
## License

MIT License
## App Features

- Upload a resume (`.pdf`, `.docx`, `.txt`) or paste text
- Receive personalized resume feedback
- Get career path and skill recommendations
- Runs locally or via Hugging Face Spaces

## Tech Stack

Python, Unsloth, Hugging Face Transformers, PEFT, Gradio, PyPDF2, python-docx
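A sketch of the upload step, dispatching on file extension with the libraries named in the tech stack. The `.pdf` and `.docx` branches assume PyPDF2 and python-docx are installed, which is why their imports are deferred:

```python
# Dispatch resume extraction by file extension (sketch of the upload handler).
# PDF and DOCX branches assume PyPDF2 and python-docx are available.
from pathlib import Path

def extract_resume_text(path: str) -> str:
    """Return plain text from a .txt, .pdf, or .docx resume file."""
    suffix = Path(path).suffix.lower()
    if suffix == ".txt":
        return Path(path).read_text(encoding="utf-8")
    if suffix == ".pdf":
        from PyPDF2 import PdfReader  # imported lazily
        return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    if suffix == ".docx":
        from docx import Document  # python-docx, imported lazily
        return "\n".join(p.text for p in Document(path).paragraphs)
    raise ValueError(f"Unsupported file type: {suffix}")
```

Keeping the heavy imports inside their branches means a plain-text upload works even where the PDF/DOCX libraries are missing.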
- Developed by: anerudh10
- License: MIT
- Finetuned from model: `unsloth/mistral-7b-instruct-v0.3-bnb-4bit`

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
