# LiteGPT Dataset

This repository contains a synthetic conversational dataset designed for training lightweight GPT-style language models such as LiteGPT. The dataset consists of user-assistant dialogues with enriched prompts and responses.

A few sample rows from `corpus.txt` (each conversation is wrapped in `<BOS>` and `<EOS>` markers):

    <BOS>
    <user>: How do I learn Python?
    <assistant>: That's a classic! Many philosophers believe it's to find happiness and purpose.
    <EOS>

    <BOS>
    <user>: Who created you?
    <assistant>: I am LiteGPT, a text-based language model trained by Raziel1234.
    <EOS>

    <BOS>
    <user>: Why is the sky blue?
    <assistant>: Regular exercise can improve your physical and mental health. Just remember to stretch!
    <EOS>

    <BOS>
    <user>: How are you?
    <assistant>: Why did the chicken cross the road? To get to the other side!
    <EOS>
## Special Tokens

- `<BOS>`: Beginning of sequence
- `<EOS>`: End of sequence
- `<user>:`: Marks the user's input
- `<assistant>:`: Marks the assistant's response
- `<PAD>`: Padding token for fixed-length sequences
- **Number of Conversations:** At least 25,000 generated examples
- **Content:** The conversations cover a variety of topics such as greetings, jokes, advice, AI knowledge, science questions, history, coding, and small talk.
## Dataset Generation

The dataset is generated automatically using `lite_gpt.py`:

```python
from lite_gpt import create_synthetic_corpus

create_synthetic_corpus()
```

This will:

1. Randomly select a user prompt from a predefined list.
2. Randomly select a corresponding assistant reply from a predefined list.
3. Save the generated conversations into `data_v2/corpus.txt`.
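For reference, here is a minimal sketch of how such a generator could work. The prompt and reply lists, the default conversation count, and the function body below are illustrative assumptions, not the actual contents of `lite_gpt.py`:

```python
import os
import random

# Illustrative assumption: lite_gpt.py keeps small predefined lists like these.
USER_PROMPTS = [
    "How do I learn Python?",
    "Who created you?",
    "Why is the sky blue?",
]
ASSISTANT_REPLIES = [
    "Start with small projects and practice a lot!",
    "I am LiteGPT, a text-based language model trained by Raziel1234.",
    "Regular exercise can improve your physical and mental health. Just remember to stretch!",
]

def create_synthetic_corpus(n_conversations=25_000, path="data_v2/corpus.txt"):
    """Write n_conversations randomly paired user/assistant turns to path."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        for _ in range(n_conversations):
            f.write("<BOS>\n")
            f.write(f"<user>: {random.choice(USER_PROMPTS)}\n")
            f.write(f"<assistant>: {random.choice(ASSISTANT_REPLIES)}\n")
            f.write("<EOS>\n")
```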
## Directory Structure

```
data_v2/
└── corpus.txt   # Synthetic conversational dataset
```

## Tokenization

The dataset is designed for GPT-2 tokenization.

- Each conversation is tokenized and padded to a maximum length (`MAX_LENGTH`) for model training.
- Special tokens are added to distinguish user and assistant turns.
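A minimal sketch of how a row could be tokenized, assuming the Hugging Face `transformers` GPT-2 tokenizer and a hypothetical `MAX_LENGTH` of 128 (the actual training script may register special tokens or build its vocabulary differently):

```python
from transformers import GPT2TokenizerFast

MAX_LENGTH = 128  # assumption: the real MAX_LENGTH is defined by the training script

# Load the GPT-2 tokenizer and register the dataset's special tokens.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.add_special_tokens({
    "bos_token": "<BOS>",
    "eos_token": "<EOS>",
    "pad_token": "<PAD>",
    "additional_special_tokens": ["<user>:", "<assistant>:"],
})
# If training a Hugging Face model, remember to call
# model.resize_token_embeddings(len(tokenizer)) after adding tokens.

text = (
    "<BOS>\n"
    "<user>: Who created you?\n"
    "<assistant>: I am LiteGPT, a text-based language model trained by Raziel1234.\n"
    "<EOS>"
)
encoded = tokenizer(
    text,
    max_length=MAX_LENGTH,
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)
print(encoded["input_ids"].shape)  # torch.Size([1, 128])
```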
## Usage

- Can be used to train lightweight language models.
- Supports sequence chunking for longer conversations (see the sketch below).
- Works with any PyTorch-based GPT-style model.
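As an example of sequence chunking, here is a minimal PyTorch sketch; `CorpusDataset` and `block_size` are illustrative names and not part of `lite_gpt.py`. It assumes `corpus.txt` has already been tokenized into a flat list of token ids:

```python
import torch
from torch.utils.data import Dataset

class CorpusDataset(Dataset):
    """Chunk a long token stream into fixed-length next-token-prediction examples."""

    def __init__(self, token_ids, block_size=128):
        # Drop the tail so every chunk has exactly block_size + 1 tokens
        # (inputs are shifted by one position to form the targets).
        n_blocks = (len(token_ids) - 1) // block_size
        self.data = torch.tensor(token_ids[: n_blocks * block_size + 1], dtype=torch.long)
        self.block_size = block_size

    def __len__(self):
        return (len(self.data) - 1) // self.block_size

    def __getitem__(self, idx):
        start = idx * self.block_size
        chunk = self.data[start : start + self.block_size + 1]
        return chunk[:-1], chunk[1:]  # (input_ids, target_ids)
```

Batches can then be drawn with a standard `torch.utils.data.DataLoader`.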
## License

This dataset is generated synthetically and is free to use under the MIT License.