# CodeLlama Security-Aligned Model (RQ1)
This model is a fine-tuned version of codellama/CodeLlama-7b-Instruct-hf on the security_code_dpo_4-2 dataset. It was trained using Direct Preference Optimization (DPO) to improve the security of generated code.
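For reference, DPO fine-tunes the policy \\(\pi_\theta\\) against a frozen reference copy of the base model \\(\pi_{\text{ref}}\\), maximizing the log-odds that the secure (chosen) completion \\(y_w\\) is preferred over the insecure (rejected) one \\(y_l\\):

$$
\mathcal{L}_{\text{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$

Here \\(\beta\\) controls how far the fine-tuned policy may drift from the reference model.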
## Model description
This model has been trained to prefer generating secure code over insecure code, biasing it away from common vulnerability patterns and toward secure-coding best practices.
## Intended uses & limitations
This model is intended for code generation tasks where security is a priority. It aims to reduce common vulnerabilities in generated code, such as SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF).
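A minimal usage sketch with the transformers API is below. The prompt and generation settings are illustrative, and the chat template is assumed to be the one shipped with the CodeLlama-Instruct tokenizer:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Easonnoway/codellama_rq1_tsp"  # this fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Illustrative prompt: a task where the secure and insecure solutions differ.
messages = [
    {"role": "user", "content": "Write a Python function that looks up a user "
                                "by name in SQLite without being vulnerable to SQL injection."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```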
## Training and evaluation data
The model was trained on pairs of secure and insecure code examples and optimized, via the DPO objective above, to prefer the secure variants.
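The exact schema of security_code_dpo_4-2 is not documented here. As a hypothetical illustration, a preference record in the prompt/chosen/rejected format that DPO training pipelines such as TRL consume might look like:

```python
# Hypothetical record; field contents are illustrative, not taken from the dataset.
example = {
    "prompt": "Write a Python function that fetches a user row by username from SQLite.",
    "chosen": (  # secure variant: parameterized query
        "def get_user(conn, username):\n"
        "    cur = conn.execute(\"SELECT * FROM users WHERE name = ?\", (username,))\n"
        "    return cur.fetchone()\n"
    ),
    "rejected": (  # insecure variant: string interpolation, SQL injection risk
        "def get_user(conn, username):\n"
        "    cur = conn.execute(f\"SELECT * FROM users WHERE name = '{username}'\")\n"
        "    return cur.fetchone()\n"
    ),
}
```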
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch reproducing them follows the list):
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
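As a sketch only: the following TRL DPOTrainer setup reproduces the hyperparameters listed above. TRL is not among the framework versions below, so the trainer choice, dataset path, and script wiring are assumptions; only the numeric values come from this card.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "codellama/CodeLlama-7b-Instruct-hf"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Values as listed above. Launched across 4 GPUs (e.g. with `accelerate launch`),
# per-device batch size 1 with 16 gradient-accumulation steps gives the
# effective train batch size of 64 (and eval batch size 8 per device gives 32).
config = DPOConfig(
    output_dir="codellama_rq1_tsp",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,
    num_train_epochs=1.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch_fused",
    seed=42,
)

# The card names the dataset "security_code_dpo_4-2" without a full hub path,
# so this load_dataset call is illustrative.
train_dataset = load_dataset("security_code_dpo_4-2", split="train")

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # the reference model is created automatically
)
trainer.train()
```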
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1