dpo_security_4-2-sft
This model is a fine-tuned version of codellama/CodeLlama-7b-Instruct-hf on the security_code_dpo_4_2_SFT dataset.
Model description
This model has been fine-tuned using SFT (Supervised Fine-Tuning) on a security-focused code dataset. It is designed to generate more secure code by avoiding common security vulnerabilities and following best practices for secure coding.
Intended uses & limitations
This model is intended for code generation tasks where security is a priority. It aims to reduce common vulnerabilities in generated code such as SQL injection, XSS, CSRF, and other security issues.
While this model has been trained to generate more secure code, it should not be solely relied upon for security-critical applications without proper code review by security experts.
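A minimal usage sketch follows. The repository id is assumed from this card's title (substitute the actual path if it differs), and the prompt format follows the standard CodeLlama-Instruct `[INST] ... [/INST]` convention.

```python
# Hedged usage sketch: load the fine-tuned model and ask for security-conscious code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Easonnoway/dpo_security_4-2-sft"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = (
    "[INST] Write a Python function that looks up a user by username in a "
    "SQLite database without being vulnerable to SQL injection. [/INST]"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```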
Training and evaluation data
The model was trained on a curated dataset of security-focused code examples, with an emphasis on secure coding patterns and practices. The training data includes code segments that demonstrate proper input validation, authentication, authorization, secure data handling, and other security best practices.
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a sketch of an equivalent configuration follows the list):
- learning_rate: 2e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
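The original training script is not included in this card; the snippet below is only a hedged sketch of a `transformers.TrainingArguments` setup that reproduces the listed values (the output directory and mixed-precision flag are assumptions).

```python
# Hedged sketch of a TrainingArguments configuration matching the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dpo_security_4-2-sft",   # assumed output path
    learning_rate=2e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,      # 4 GPUs x batch 1 x 16 steps = total train batch 64
    num_train_epochs=5.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    bf16=True,                           # assumption: mixed precision on multi-GPU
)
```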
Training results
Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1