
Quantization made by Richard Erkhov.

Github | Discord | Request more models

phi2-squadv2-merged - bnb 8bits
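Below is a minimal loading sketch for this bnb 8-bit checkpoint using transformers with bitsandbytes and accelerate installed. The repository id and the QA prompt are illustrative assumptions, not instructions from the original card.

```python
# Minimal sketch, assuming `pip install transformers accelerate bitsandbytes`.
# The repo id below is a hypothetical placeholder -- substitute this card's actual path.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/keskin-oguzhan_-_phi2-squadv2-merged-8bits"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",  # the pre-quantized checkpoint carries its own bnb 8-bit config
)

# SQuAD v2-style extractive QA prompt (formatting is illustrative only).
prompt = (
    "Context: The Eiffel Tower is located in Paris, France.\n"
    "Question: Where is the Eiffel Tower located?\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```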

Original model description:

```yaml
tags:
- merge
- mergekit
- lazymergekit
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
base_model:
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
- keskin-oguzhan/phi2-squadv2
```

phi2-squadv2-merged

phi2-squadv2-merged is a passthrough self-merge of keskin-oguzhan/phi2-squadv2, built with LazyMergekit by stacking seven overlapping layer slices of the same model:

🧩 Configuration

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 8]
    model: keskin-oguzhan/phi2-squadv2
- sources:
  - layer_range: [4, 12]
    model: keskin-oguzhan/phi2-squadv2
- sources:
  - layer_range: [8, 16]
    model: keskin-oguzhan/phi2-squadv2
- sources:
  - layer_range: [12, 20]
    model: keskin-oguzhan/phi2-squadv2
- sources:
  - layer_range: [16, 24]
    model: keskin-oguzhan/phi2-squadv2
- sources:
  - layer_range: [20, 28]
    model: keskin-oguzhan/phi2-squadv2
- sources:
  - layer_range: [24, 32]
    model: keskin-oguzhan/phi2-squadv2
```
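The seven overlapping 8-layer slices stack the donor model into a 56-layer network, which is why the merged checkpoint is substantially larger than the original phi2-squadv2. As a rough sketch (not the author's exact commands), the merge can be reproduced by saving the configuration above to a file and calling mergekit's CLI; the file and output names are placeholders.

```python
# Illustrative sketch, assuming `pip install mergekit` and that the YAML above has been
# saved as config.yaml; "merged" is an arbitrary output directory (both names assumed).
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "merged", "--copy-tokenizer"],
    check=True,  # raise if the merge does not complete
)
```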
Model size: 4.67B params (safetensors; tensor types F32, F16, I8)