---
tags:
- qwen3-8b
- symbiotic
- symbioticai
---

# SymbioticLM-8B
**Model Type**: Hybrid Symbolic–Transformer
**Base Model**: Qwen-8B
**Framework**: PyTorch + Transformers-compatible
**Purpose**: Long-memory symbolic reasoning + high-fidelity language generation

---

## Overview

SymbioticLM-8B is a hybrid transformer model with built-in symbolic cognition. It combines an 8B-parameter Qwen-based transformer with modular symbolic processors and a persistent memory buffer. The model supports both general conversation and deeper symbolic tasks such as theorem generation, logical chaining, and structured reasoning, with memory retained across turns.
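
If the symbolic modules ship as custom model code, loading could look like the sketch below. The repository id is a placeholder, and loading `memory.pt` manually is an assumption based on the file list further down, not a documented API.

```python
# Minimal loading sketch. Assumptions: the repo id is hypothetical, and
# memory.pt is loaded manually alongside the transformer weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SymbioticAI/SymbioticLM-8B"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    trust_remote_code=True,  # needed if the symbolic modules are custom code
)

# The persistent symbolic memory ships as a separate snapshot (see Files Included).
memory = torch.load("memory.pt", map_location="cpu")
```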

---

## Architecture Highlights

- **Backbone**: Qwen-8B rotary transformer
- **Symbolic Dim**: 4096
- **Symbolic Modules**:
  - ThoughtDynamicsLNN (multi-head LSTM attention)
  - CrystallineProcessor (DNAConv GNN)
  - LiquidThoughtProcessor (recurrent symbol folding)
  - HelicalDNAProcessor (helical linear projection)
- **Memory**: 2048 symbolic vectors (float32) with entropy-aware retrieval and contextual recall (see the retrieval sketch after this list)
- **Dream Mode**: Self-generates symbolic cognition offline
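
This card does not specify how entropy-aware retrieval scores memory slots. The sketch below is one plausible reading, assuming the buffer is a `(2048, 4096)` float32 tensor and that retrieval weights cosine similarity by per-slot entropy; only the buffer size and dtype come from the list above.

```python
# Illustrative entropy-aware retrieval over the symbolic memory buffer.
# The scoring rule (cosine similarity scaled by normalized per-slot entropy)
# is an assumption, not the documented algorithm.
import torch
import torch.nn.functional as F

def retrieve(memory: torch.Tensor, query: torch.Tensor, k: int = 8) -> torch.Tensor:
    """memory: (2048, 4096) float32, query: (4096,) -- assumed shapes."""
    sims = F.cosine_similarity(memory, query.unsqueeze(0), dim=-1)   # (2048,)
    probs = F.softmax(memory, dim=-1)                                # treat slots as distributions
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)     # per-slot entropy
    scores = sims * (entropy / entropy.max())                        # entropy-weighted similarity
    return memory[scores.topk(k).indices]                            # (k, 4096) recalled vectors
```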

---

## Files Included

| File | Description |
|---------------------------|---------------------------------------------------------|
| `model.bin` | PyTorch weights (LFS tracked) |
| `model.safetensors` | Same weights in `safetensors` format (recommended) |
| `memory.pt` | Symbolic memory snapshot (entropic, pretrained) |
| `config.json` | Base model configuration |
| `generation_config.json` | Sampling and decoding config (temperature, top_p, etc.) |
| `tokenizer.json` | Tokenizer data with custom tags and structure |
| `added_tokens.json` | Extra tokens like `<THM>`, `<PROOF>`, `<D_EPS>` |
| `special_tokens_map.json` | Maps for special tokens used during generation |
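
Since `model.safetensors` is the recommended format, the weights can also be inspected or loaded directly with the `safetensors` library; the local file path below is assumed.

```python
# Load the recommended safetensors weights directly (file name as in the
# table above; a local path is assumed).
from safetensors.torch import load_file

state_dict = load_file("model.safetensors")  # dict: tensor name -> torch.Tensor
print(f"{len(state_dict)} tensors loaded")
```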

---

## Intended Uses

- General symbolic reasoning and logical conversation
- Memory-aware tutoring and research assistants
- Code + math proof modeling (see the prompt sketch after this list)
- Context-persistent dialogue systems
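
For proof modeling, one plausible convention is to wrap the statement in the custom tags from `added_tokens.json`. Only the tag names come from this card; the prompt format itself is a guess. This reuses the `model` and `tokenizer` from the loading sketch above.

```python
# Hypothetical tagged prompt for theorem/proof modeling; the <THM>/<PROOF>
# prompting convention is assumed, only the tokens themselves are documented.
prompt = "<THM> Every finite integral domain is a field. <PROOF>"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=False))
```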

---

## Limitations

- Not instruction-tuned; chat-style inputs may require prompt engineering
- A larger memory buffer may slightly increase CPU load
- Symbolic inference is evolved offline; memory must be actively seeded (see the seeding sketch after this list)
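
Because memory must be actively seeded, a warm-up step is presumably needed before first use. The card does not document a seeding API, so the sketch below assumes the snapshot is a writable `(2048, 4096)` tensor and uses a hypothetical `encode_symbolic` stand-in.

```python
# Seeding sketch. encode_symbolic is a stand-in for whatever produces symbolic
# embeddings in the real pipeline; here it hashes text into a pseudo-random
# embedding purely so the example runs.
import torch

def encode_symbolic(text: str) -> torch.Tensor:
    g = torch.Generator().manual_seed(abs(hash(text)) % (2**31))
    return torch.randn(4096, generator=g)

memory = torch.load("memory.pt", map_location="cpu")  # pretrained snapshot
seeds = ["Group axioms: closure, associativity, identity element, inverses."]
for i, text in enumerate(seeds):
    memory[i] = encode_symbolic(text)  # overwrite the first slots with seeds
torch.save(memory, "memory_seeded.pt")
```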

---

## Citations

This model was independently developed by Roy S. Colca Jr.
Please credit "SymbioticLM" if you use the symbolic components in downstream applications.

---