Update README.md

README.md CHANGED
@@ -12,3 +12,84 @@ short_description: BERT1+BERT2
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

---
title: NewFakeNewsModel
emoji: ⚡
colorFrom: purple
colorTo: gray
sdk: gradio
sdk_version: 5.34.2
app_file: app.py
pinned: false
license: mit
short_description: work in progress
---

# Fake News Classifier (BERT-based)

This project detects whether a news article is real or fake using a fine-tuned BERT model for binary text classification.

---

## Disclaimer

- This project is for **educational and experimental purposes only**.
- It is **not suitable for real-world fact-checking** or serious decision-making.
- The model uses a simple binary classifier and does not verify factual correctness.

---

## Project Overview

This fake news classifier was built as part of a research internship to:

- Learn how to fine-tune transformer models on classification tasks
- Practice handling class imbalance using weighted loss
- Deploy models using Hugging Face-compatible APIs

---

## How It Works

- A BERT-based model (`bert-base-uncased`) was fine-tuned on a labeled dataset of news articles.
- Input text is tokenized using `BertTokenizer`.
- A custom Trainer with class-weighted loss was used to handle class imbalance (sketched under Training Details below).
- Outputs are binary: **0 = FAKE**, **1 = REAL** (see the inference sketch after this list).
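
A minimal inference sketch of that pipeline, assuming the fine-tuned weights were saved locally; the `./model` path and the `classify` helper are illustrative, not part of the repo:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

MODEL_DIR = "./model"  # hypothetical location of the fine-tuned checkpoint

tokenizer = BertTokenizer.from_pretrained(MODEL_DIR)
model = BertForSequenceClassification.from_pretrained(MODEL_DIR)
model.eval()

def classify(text: str) -> str:
    # Tokenize the article text; max_length=512 is BERT's limit, assumed here.
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Binary output as described above: 0 = FAKE, 1 = REAL.
    return "REAL" if logits.argmax(dim=-1).item() == 1 else "FAKE"

print(classify("Example headline and article body to score."))
```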

### Training Details

- Model: `BertForSequenceClassification`
- Epochs: 4
- Batch size: 8
- Learning rate: 2e-5
- Optimizer: AdamW (via Hugging Face Trainer)
- Evaluation Metrics: Accuracy, F1-score, Precision, Recall
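
A sketch of how these pieces could fit together with the Hugging Face `Trainer`. Subclassing `compute_loss` is one common way to apply a class-weighted loss; the class weights, dataset variables (`train_ds`, `eval_ds`), and output path below are placeholders, since the README does not show the actual training script:

```python
import torch
from torch import nn
from transformers import (
    BertForSequenceClassification,
    Trainer,
    TrainingArguments,
)
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Placeholder weights: in practice, derive these from the label counts
# (e.g. inverse class frequency) of the training split.
class_weights = torch.tensor([1.0, 1.3])

class WeightedLossTrainer(Trainer):
    """Trainer variant that swaps in a class-weighted cross-entropy loss."""

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss_fct = nn.CrossEntropyLoss(weight=class_weights.to(outputs.logits.device))
        loss = loss_fct(outputs.logits.view(-1, 2), labels.view(-1))
        return (loss, outputs) if return_outputs else loss

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"
    )
    return {"accuracy": accuracy_score(labels, preds),
            "precision": precision, "recall": recall, "f1": f1}

args = TrainingArguments(
    output_dir="./results",          # illustrative path
    num_train_epochs=4,
    per_device_train_batch_size=8,
    learning_rate=2e-5,              # AdamW is the Trainer's default optimizer
)

# Assumed: train_ds / eval_ds are tokenized dataset splits with a "labels" column.
trainer = WeightedLossTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,
)
trainer.train()
```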

---

## 🛠 Libraries Used

- `transformers`
- `datasets`
- `torch`
- `scikit-learn`
- `pandas`
- `nltk` (optional preprocessing)
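
The repo's `requirements.txt` is not shown in this diff; a minimal version consistent with the list above would look something like the following (no versions are pinned in the source, so none are invented here; `gradio` is only needed for running `app.py` locally, since the Space's gradio SDK provides it):

```text
transformers
datasets
torch
scikit-learn
pandas
nltk
gradio
```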

---

## 📦 Installation & Running

```bash
pip install -r requirements.txt
python app.py
```

Or run the training script in a notebook or script environment if you're using Google Colab or Jupyter.
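
The frontmatter declares `sdk: gradio` and `app_file: app.py`, but the app itself is not part of this diff. A minimal sketch of what such an `app.py` could look like, reusing the hypothetical `MODEL_DIR` and `classify` helper from the inference sketch above:

```python
import gradio as gr
import torch
from transformers import BertTokenizer, BertForSequenceClassification

MODEL_DIR = "./model"  # hypothetical checkpoint location, as above

tokenizer = BertTokenizer.from_pretrained(MODEL_DIR)
model = BertForSequenceClassification.from_pretrained(MODEL_DIR)
model.eval()

def classify(text: str) -> str:
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return "REAL" if logits.argmax(dim=-1).item() == 1 else "FAKE"

# A simple text-in, label-out interface for the Space.
demo = gr.Interface(
    fn=classify,
    inputs=gr.Textbox(lines=8, label="News article text"),
    outputs=gr.Label(label="Prediction"),
    title="Fake News Classifier (BERT-based)",
)

if __name__ == "__main__":
    demo.launch()
```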

---