ICONN 1: The New Era of Open-Source AI

Community Article · Published June 16, 2025

GPU-poor?
Running on fewer than 3× A100s? No problem. Try our Lite version: ICONN 0.5 Mini (8B parameters).


🧠 Emotional Context Awareness

ICONN 1 interprets emotional tone and adjusts its vocabulary, style, and delivery—creating emotionally responsive, human-like conversations.

⚙️ ICONN Emotional Core (IEC)

Note: IEC is not available on Hugging Face.

IEC powers ICONN’s emotional intelligence with millions of micro-agents, simulating billions of emotional states and context-aware reactions.


🧩 Reasoning + Relating

ICONN is more than logic. Its relational architecture supports storytelling, coaching, collaboration, and creative conversation. It thinks with you, not just for you.


🧠 What Is in the ICONN MoE?

ICONN is a Mixture of Experts (MoE) model. Each user message is routed to the most relevant expert based on keywords and semantic intent (a toy sketch of this routing follows the expert list below).

| User Input | Expert Chosen |
| --- | --- |
| "Hi!" | ICONN-Base |
| "What is physics?" | ICONN-e1-Science |
| "Explain how to cube a number." | ICONN-e1 |

Expert Descriptions

  • ICONN-Base: base conversational model.
  • ICONN-e1-Science: Expert for science reasoning tasks, fine-tuned on academic data.
  • ICONN-e1: General reasoning model.
  • ICONN-Writer: Creative writing expert, fine-tuned for narrative fluency.
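
To make the routing concrete, here is a minimal sketch of the keyword half of that selection. It is illustrative only: ICONN's actual gating network lives inside the MoE weights and also scores semantic intent, and the keyword sets below are hypothetical stand-ins.

import re

# Illustrative router sketch only. ICONN's real gating is part of the MoE
# checkpoint; these keyword sets are invented stand-ins for the
# "keywords and semantic intent" signal described above.
EXPERT_KEYWORDS = {
    "ICONN-e1-Science": {"physics", "chemistry", "biology", "experiment"},
    "ICONN-e1":         {"explain", "why", "how", "prove", "calculate"},
    "ICONN-Writer":     {"story", "poem", "write", "narrative"},
}

def route(message, default="ICONN-Base"):
    """Pick the expert whose keyword set overlaps the message most."""
    tokens = set(re.findall(r"[a-z]+", message.lower()))
    scores = {name: len(tokens & kws) for name, kws in EXPERT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

print(route("Hi!"))                            # ICONN-Base
print(route("What is physics?"))               # ICONN-e1-Science
print(route("Explain how to cube a number."))  # ICONN-e1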

🚀 Usage

⚠️ Minimum Requirements

  • 4× Nvidia A100 or 1× Nvidia B100
  • 120 GB RAM
  • 120–192 GB VRAM (a quick estimate follows this list)
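
The VRAM band tracks the raw weight footprint: 84B parameters stored in BF16 take 2 bytes each, roughly 168 GB before any KV cache or activations, which is why the spec lands in the 120–192 GB range (the low end presumably assumes quantization or offloading). A quick sanity check:

# Back-of-envelope weight footprint for ICONN 1: 84B parameters in BF16
# (2 bytes each). Weights only; KV cache and activations add more on top.
params = 84e9
bytes_per_param = 2  # BF16
print(f"{params * bytes_per_param / 1e9:.0f} GB")  # -> 168 GB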

If your system doesn't meet these requirements, you can fall back to the Lite version, ICONN 0.5 Mini (8B parameters), and point the code example below at it instead of the full model.


🧪 Code Example

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

def run_iconn_chatbot(model_name="Enderchef/ICONN-1"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Load in BF16 and let accelerate shard the weights across available
    # GPUs; an 84B model will not fit on one device in full precision.
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )

    chat_pipeline = pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        do_sample=True,
        top_p=0.9,
        temperature=0.4,
        pad_token_id=tokenizer.eos_token_id,
    )

    print(f"ICONN chatbot running with model: {model_name}. Type 'exit' to quit.")
    conversation_history = ""

    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            print("Goodbye!")
            break

        conversation_history += f"User: {user_input}\nBot:"
        # return_full_text=False makes the pipeline return only the newly
        # generated tokens, so no prompt-stripping arithmetic is needed.
        response = chat_pipeline(
            conversation_history,
            max_new_tokens=100,
            return_full_text=False,
        )[0]["generated_text"]
        # Keep only the first line so the model's continuation of the
        # next "User:" turn is discarded.
        bot_reply = response.strip().split("\n")[0]

        print(f"Bot: {bot_reply}")
        conversation_history += f" {bot_reply}\n"

if __name__ == "__main__":
    run_iconn_chatbot()

## 📦 Model Info

Parameters: 84B

Precision: BF16

Format: Safetensors

