|
|
--- |
|
|
license: mit |
|
|
task_categories: |
|
|
- text-generation |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- llm |
|
|
- newspaper |
|
|
- journal |
|
|
- old |
|
|
- pre1950 |
|
|
- 1800s |
|
|
pretty_name: Pre-1950s Text Dataset |
|
|
size_categories: |
|
|
- n<1K |
|
|
--- |
|
|
|
|
|
# A dataset of pre-1950 English text |
|
|
|
|
|
This is a high-quality, thoroughly curated 100+ GB dataset of English-only text
|
|
written before 1950-01-01. It was collected for the purpose of training LLMs, |
|
|
initially a small 125M model (Archibald-125M) and later a 3B or 7B model |
|
|
(depending on funding). |
|
|
|
|
|
## Why train an LLM on old text? |
|
|
|
|
|
One unanswered question about LLMs is "can they invent?". Given how much they know about the world, it's somewhat surprising that LLMs seem to have difficulty making genuinely innovative connections (although they get better every day).
|
|
|
|
|
Because Archibald-125M has a knowledge cutoff of 1950, we can quiz and prompt it until Archie figures out something that was a novel invention in 1950 but is well-established science today. We can provide more detailed and
|
|
more helpful hints to Archie until the answer is blindingly obvious, and then |
|
|
examine what is required for an LLM to make a discovery. Importantly, this |
|
|
process can be automated, using _modern_ LLMs to prompt and evaluate the |
|
|
outputs of Archie. |
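
A rough sketch of what that automated loop might look like (everything here is a hypothetical placeholder, not an existing API):

```
def can_archie_invent(target, hints, archie, judge, max_hints=10):
    """Quiz Archie about a post-1950 discovery, adding progressively stronger hints.

    `archie` is the 1950-cutoff model and `judge` is a modern LLM acting as the
    grader; both are placeholders for whatever inference API ends up being used.
    Returns the number of hints needed, or None if Archie never gets there."""
    prompt = target.initial_question
    answer = archie.generate(prompt)
    if judge.is_equivalent_to(answer, target.known_answer):
        return 0  # no hints needed at all
    for hints_used, hint in enumerate(hints[:max_hints], start=1):
        prompt += "\n" + hint  # make each round's hint a little more explicit
        answer = archie.generate(prompt)
        if judge.is_equivalent_to(answer, target.known_answer):
            return hints_used  # fewer hints => closer to independent invention
    return None  # never got there, even with every hint
```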
|
|
|
|
|
Beyond discovering the nature of invention, text before 1950 has several |
|
|
qualities which make it interesting: |
|
|
|
|
|
- Significantly different data distribution. Almost all modern LLMs are trained on a corpus of data created mostly after the advent of the internet, and a majority of that after the internet boom of the 2000s. Archibald is possibly the only LLM with a significantly different but still human-generated training distribution.
|
|
- Archaic moral views: in 1950, women often could not open a bank account, obtain credit, or sign a lease in their own name without a husband's or male relative's signature; homosexuality was criminalized; racial segregation was commonplace and legally enforced. Probing Archie about the morality of these points might provide insight into how we can probe modern LLMs about our own moral failures.
|
|
|
|
|
The primary goal is to figure out the nature of invention, but it's likely |
|
|
there'll be many interesting side-quests along the way. |
|
|
|
|
|
## Some events after the 1950-01-01 cutoff which Archibald-125M doesn't know about
|
|
|
|
|
- 1950: **First credit card** (Diners Club). Before this: People paid in cash or by cheque. Store credit was local, and debt tracking was manual. |
|
|
- 1952: **Polio vaccine** (Salk). Before this: Tens of thousands per year were paralyzed or killed by polio. Children avoided public swimming pools during outbreaks. |
|
|
- 1953: **Death of Stalin**. Before this: Stalin's dictatorship controlled the Soviet Union with mass purges and forced labor camps. Eastern Europe remained locked behind the Iron Curtain. |
|
|
- 1953: **Discovery of DNA structure** (Watson & Crick). Before this: Heredity was understood abstractly. No genetic engineering, paternity testing, or DNA forensics. |
|
|
- 1954: **Brown v. Board of Education** (USA). Before this: Racial segregation was legal in schools. Black children attended underfunded “separate but equal” schools. |
|
|
- 1955: **Rosa Parks arrested** / Bus boycott. Before this: Black passengers were legally forced to give up seats for white riders in much of the U.S. South. |
|
|
- 1957: **Launch of Sputnik**. Before this: No artificial satellites. Global communications were limited to undersea cables and radio. Weather forecasts were rudimentary. |
|
|
- 1958: **NASA founded**. Before this: No civilian space program. Military handled missile research; spaceflight was science fiction. |
|
|
- 1959: **First commercial photocopier** (Xerox 914). Before this: Copies were made with carbon paper, mimeographs, or by hand. Reproducing documents was slow and messy. |
|
|
- 1960: **First laser**. Before this: No barcode scanning, laser surgery, or optical fiber communication. |
|
|
- 1961: **Yuri Gagarin orbits Earth**. Before this: No human had been to space. Space exploration was theoretical; Earth was the only world we’d seen directly. |
|
|
- 1963: **Assassination of JFK**. Before this: U.S. politics was in a post-war optimism phase. After: Deepened Cold War tensions and conspiracy culture. |
|
|
- 1964: **U.S. Civil Rights Act**. Before this: Legal segregation and open discrimination in housing, employment, and voting. Jim Crow laws were enforced in the South. |
|
|
- 1965: **Moore’s Law** proposed. Before this: Computing power was scarce. Computers filled rooms, used punch cards, and served governments or large corporations. |
|
|
- 1967: **First heart transplant** (South Africa). Before this: End-stage heart failure meant death. No organ transplants; no immunosuppressive treatment.
|
|
- 1969: **Apollo 11 Moon Landing**. Before this: The Moon was unreachable. Space travel was a Cold War dream and sci-fi trope. |
|
|
- 1971: **Intel 4004** (first commercial microprocessor). Before this: Computers were assembled from separate logic circuits. No personal computing. Embedded electronics were rare. |
|
|
- 1972: **Watergate** scandal begins. Before this: Presidential power was largely unchecked in public perception. The scandal triggered a wave of investigative journalism and public distrust. |
|
|
- 1973: **First mobile call** (Motorola prototype). Before this: Phones were tethered to landlines. Calling meant finding a telephone booth or home line. |
|
|
- 1973: **Oil Crisis** / OPEC embargo. Before this: Western nations assumed oil supply was cheap and endless. After: Gasoline rationing, speed limits, and birth of modern energy policy. |
|
|
- 1975: **Personal computers** begin (Altair 8800). Before this: Only corporations or universities used computers. Home computing was unimaginable. |
|
|
- 1981: **IBM PC** released. Before this: Hobbyist computers used incompatible, inconsistent architectures. The IBM PC standardized the architecture for business and home computing.
|
|
- 1983: **First mobile phone** sold commercially (Motorola DynaTAC). Before this: Communication on the move meant CB radio or pagers. Businesspeople were tied to their desks. |
|
|
- 1984: **DNA fingerprinting** invented. Before this: Criminal evidence relied on fingerprints, blood types, and eyewitnesses. Paternity was legally disputed without hard evidence. |
|
|
- 1989: **Fall of Berlin Wall**. Before this: Germany was split, and Eastern Europe was under Soviet domination. Movement across the Iron Curtain was deadly. |
|
|
- 1990: **World Wide Web** invented (Tim Berners-Lee). Before this: The internet existed for scientists and the military, but was text-only, obscure, and difficult to use. |
|
|
- 1995: **GPS becomes publicly available**. Before this: Navigation relied on paper maps, compasses, and asking for directions. |
|
|
- 1996: **Dolly the sheep** cloned. Before this: Cloning of mammals was thought impossible. Genetics was still largely experimental. |
|
|
- 1998: **Google founded**. Before this: Internet search was poor. Directories like Yahoo were compiled by hand, and engines like AltaVista ranked results poorly.
|
|
- 1999: **Introduction of Bluetooth**. Before this: No standard short-range radio link between devices; they had to connect with cables or over infrared.
|
|
- 2001: **9/11** attacks on the U.S. Before this: Air travel was relatively relaxed. Global terrorism was not the focus of national security.
|
|
- 2003: **Human Genome Project** completed. Before this: Human genetics was understood in fragments. Precision medicine was impossible. |
|
|
- 2004: **Facebook** launched. Before this: Social life online was fragmented (forums, IRC, email lists). No centralized digital social identity. |
|
|
- 2007: **iPhone** released. Before this: Phones were mainly for calling/texting. No universal internet access in your pocket. |
|
|
- 2008: **Global Financial Crisis**. Before this: Housing was considered a safe investment. After: Global austerity and mass unemployment. |
|
|
- 2012: **CRISPR** used for gene editing. Before this: Gene editing was imprecise, slow, and expensive. |
|
|
- 2016: **Brexit** referendum. Before this: EU membership seemed permanent. Britain’s vote marked a turn in global politics toward nationalism. |
|
|
- 2016: **AlphaGo** shows that deep learning surpasses human performance in Go. Before this: AI was limited to narrow tasks. After: Widespread fear and hype around general intelligence. |
|
|
|
|
|
# Data sources to investigate |
|
|
|
|
|
- Wikipedia has a big list of newspaper archives: https://en.wikipedia.org/wiki/Wikipedia:List_of_online_newspaper_archives
|
|
- Also see https://github.com/haykgrigo3/TimeCapsuleLLM
|
|
|
|
|
# Data Sources in use |
|
|
|
|
|
## Project Gutenberg |
|
|
|
|
|
Download (~8 GB). The rsync filters below keep only `.txt`/`.TXT`/`.text` files (plus the directory structure):
|
|
|
|
|
``` |
|
|
rsync -av --del \ |
|
|
--include='*/' \ |
|
|
--include='*.txt' \ |
|
|
--include='*.TXT' \ |
|
|
--include='*.text' \ |
|
|
--exclude='*' \ |
|
|
--info=progress2 \ |
|
|
ftp.ibiblio.org::gutenberg \ |
|
|
data/gutenberg |
|
|
``` |
|
|
|
|
|
List all unique file extensions: |
|
|
|
|
|
``` |
|
|
find data/gutenberg/ -type f | sed -n 's/.*\.\([^.\/]\+\)$/\1/p' | sort -u |
|
|
``` |
|
|
|
|
|
If the download was run without the include filters (or other file types slip through), there'll be a lot of non-text files. Remove them, keeping unknown file types in case they're useful:
|
|
|
|
|
``` |
|
|
find data/gutenberg/ -type f \( -iname '*.m4a' -o -iname '*.m4b' -o -iname '*.gif' -o -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.html' -o -iname '*.htm' -o -iname '*.png' -o -iname '*.mp3' -o -iname '*.rst' -o -iname '*.rtf' -o -iname '*.doc' -o -iname '*.lit' -o -iname '*.xml' -o -iname '*.iso.*' -o -iname '*.prc' \) -delete |
|
|
``` |
|
|
|
|
|
List all `.txt` files and their sizes (in human-readable units):
|
|
|
|
|
``` |
|
|
find data/gutenberg -type f -iname '*.txt' -print0 | xargs -0 du -h -c | sort -h
|
|
``` |
|
|
|
|
|
Get a list of all non-English text files (based on Gutenberg's own `Language: $FOOBAR` label): |
|
|
|
|
|
``` |
|
|
rg -uu '^Language:' data/gutenberg | rg -v 'Language: English' | sed 's/:.*$//' |
|
|
``` |
|
|
|
|
|
And to delete them, pipe through `sort -u` to `xargs rm -v`:
|
|
|
|
|
``` |
|
|
rg -uu '^Language:' data/gutenberg | rg -v 'Language: English' | sed 's/:.*$//' | sort -u | xargs -r rm -v |
|
|
``` |
|
|
|
|
|
We need to remove the Project Gutenberg header and footer:
|
|
|
|
|
``` |
|
|
*** START OF THE PROJECT GUTENBERG EBOOK 10486 *** |
|
|
|
|
|
Provided by McGuinn's Folk Den (http://www.ibiblio.org/jimmy/folkden) |
|
|
|
|
|
[...] |
|
|
|
|
|
*** END OF THE PROJECT GUTENBERG EBOOK 10486 *** |
|
|
``` |
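
Since the marker lines are known (shown above), a minimal Python sketch for stripping the header and footer could look like the following. The marker wording varies between files (`THE`/`THIS`, and older eBooks use different boilerplate entirely), so anything the regexes don't match is left untouched for manual review:

```
import re
from pathlib import Path

START_RE = re.compile(r"^\*\*\* ?START OF (THE|THIS) PROJECT GUTENBERG EBOOK.*$", re.I | re.M)
END_RE = re.compile(r"^\*\*\* ?END OF (THE|THIS) PROJECT GUTENBERG EBOOK.*$", re.I | re.M)

def strip_gutenberg_boilerplate(text):
    """Keep only the text between the START and END marker lines, if both exist."""
    start, end = START_RE.search(text), END_RE.search(text)
    if start and end and start.end() < end.start():
        return text[start.end():end.start()].strip() + "\n"
    return text  # markers not found: leave untouched for manual review

for path in Path("data/gutenberg").rglob("*.txt"):
    raw = path.read_text(encoding="utf-8", errors="replace")
    cleaned = strip_gutenberg_boilerplate(raw)
    if cleaned != raw:
        path.write_text(cleaned, encoding="utf-8")
```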
|
|
|
|
|
And also some transcriber's notes: |
|
|
|
|
|
``` |
|
|
[Transcriber's Note: Printers' errors have been marked with the notation |
|
|
** . There are a few special characters in the section on Erasmus Darwin; |
|
|
macrons (a straight line over a letter) are denoted [=x] and breves |
|
|
(the bottom half of a circle over a letter) are denoted [)x].] |
|
|
|
|
|
``` |
|
|
|
|
|
We also need to remove anything published after 1950, and anything not in English.
|
|
|
|
|
Hmm, a bit problematic: Project Gutenberg explicitly does not include the original publication date of the items in its catalogue ([link](https://www.gutenberg.org/ebooks/offline_catalogs.html#the-gutindex-listings-of-ebooks)):
|
|
|
|
|
> Project Gutenberg metadata does not include the original print source |
|
|
> publication date(s). Because Project Gutenberg eBooks are substantially |
|
|
> different from the source book(s), we track the Project Gutenberg publication |
|
|
> date (“release date”), but do not include print source information in the |
|
|
> metadata. |
|
|
|
|
|
So we'll need to date all the items manually. Hrmm |
|
|
|
|
|
## Chronicling America |
|
|
|
|
|
Information: https://chroniclingamerica.loc.gov/ocr/ |
|
|
|
|
|
JSON listing of files: https://chroniclingamerica.loc.gov/ocr.json |
|
|
|
|
|
Download the full dataset, one archive at a time (total size is 2,115 GB):
|
|
|
|
|
``` |
|
|
uv run src/download_chronicling_america.py |
|
|
``` |
|
|
|
|
|
Conveniently, they're all organised by date, so we can find all files under date directories from 1950 onwards and delete them. Use `-mindepth`/`-maxdepth` so `find` doesn't traverse deeper than it needs to:

(these paths assume the directory flattening described further down, done to reduce inode usage)
|
|
|
|
|
``` |
|
|
# preview |
|
|
$ find . -regextype posix-extended -mindepth 4 -maxdepth 4 -type f \ |
|
|
-regex '.*/(1950|19[5-9][0-9]|20[0-9]{2})-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01])/.+' -print |
|
|
|
|
|
# DELETE |
|
|
$ find . -regextype posix-extended -mindepth 4 -maxdepth 4 -type f \ |
|
|
-regex '.*/(1950|19[5-9][0-9]|20[0-9]{2})-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01])/.+' -print -delete |
|
|
``` |
|
|
|
|
|
We'll also want to delete all the XML files: |
|
|
|
|
|
``` |
|
|
find -type f -iname '*.xml' -delete |
|
|
``` |
|
|
|
|
|
(this will clear a few hundred GB; current versions of the download Python script auto-delete the XML files after extraction)
|
|
|
|
|
TODO: the resulting files are pretty bad. The OCR output has many artefacts, and it's not obvious how to fix all of them, since the source scans/images apparently aren't available. Not sure how to fix these without using modern LLMs and potentially contaminating the dataset.
|
|
|
|
|
At some point, I ran out of inodes. Removing XML files helped, but didn't solve the issue. This shell command removes empty leaf directories:
|
|
|
|
``` |
|
|
find data -mindepth 1 -depth -type d -empty -print -delete |
|
|
``` |
|
|
|
|
|
This command lists inode usage: |
|
|
|
|
|
``` |
|
|
$ df -i / |
|
|
Filesystem Inodes IUsed IFree IUse% Mounted on |
|
|
/dev/mapper/ubuntu--vg-ubuntu--lv 60M 60M 313K 100% / |
|
|
``` |
|
|
|
|
|
See where most of the inodes are going:
|
|
|
|
|
``` |
|
|
$ du --inodes -d1 data/gutenberg | sort -nr | head |
|
|
154K total |
|
|
154K data/gutenberg |
|
|
44K data/gutenberg/1 |
|
|
43K data/gutenberg/4 |
|
|
42 data/gutenberg/0 |
|
|
35K data/gutenberg/3 |
|
|
33K data/gutenberg/2 |
|
|
``` |
|
|
|
|
|
Go through `az_bentonite_ver02` and flatten the directory structure from `az_bentonite_ver02/thing/year/month/day/edition/sequence/ocr.txt` down to `az_bentonite_ver02/thing/year-month-day/edition/sequence/ocr.txt`:
|
|
|
|
|
``` |
|
|
find az_bentonite_ver02 -regextype posix-extended -type d -depth \ |
|
|
-regex '.*/[0-9]{4}/[0-9]{2}/[0-9]{2}$' \ |
|
|
| while IFS= read -r d; do |
|
|
y=$(basename "$(dirname "$(dirname "$d")")") |
|
|
m=$(basename "$(dirname "$d")") |
|
|
dd=$(basename "$d") |
|
|
root=$(dirname "$(dirname "$(dirname "$d")")") |
|
|
tgt="$root/$y-$m-$dd" |
|
|
[ -e "$tgt" ] && { echo "skip (exists): $tgt"; continue; } |
|
|
mv "$d" "$tgt" |
|
|
rmdir -p "$(dirname "$d")" 2>/dev/null || true # remove empty month/year |
|
|
done |
|
|
``` |
|
|
|
|
|
## Biodiversity Heritage Library |
|
|
|
|
|
60+ million pages of OCR content (~41 GB compressed, 138 GB uncompressed)
|
|
|
|
|
Download: |
|
|
|
|
|
``` |
|
|
BHL_URL="https://smithsonian.figshare.com/ndownloader/files/52893371" |
|
|
mkdir -p data/bhl && curl -L "$BHL_URL" | tar -xj -C data/bhl |
|
|
``` |
|
|
|
|
|
NOTE: the download was flaky for me and I struggled. Eventually I got it to work, but there are a bunch of redirects and the links have really short lifetimes. Good luck.
|
|
|
|
|
From:
|
|
|
|
|
``` |
|
|
https://smithsonian.figshare.com/articles/dataset/BHL_Optical_Character_Recognition_OCR_-_Full_Text_Export_new_/21422193?file=52893371 |
|
|
``` |
|
|
|
|
|
Remove non-text files: |
|
|
|
|
|
``` |
|
|
find data/bhl/ -type f \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.html' -o -iname '*.htm' -o -iname '*.png' -o -iname '*.mp3' -o -iname '*.rst' -o -iname '*.rtf' -o -iname '*.doc' -o -iname '*.lit' -o -iname '*.xml' -o -iname '*.prc' \) -delete |
|
|
``` |
|
|
|
|
|
## Archive.org |
|
|
|
|
|
The script `src/download_archive_dot_org.py` will download an _index_ of all the archive items matching the query described below. These indices take up about 259MB and are stored in `data/archive-dot-org/indices/`; each entry contains the date, the id, and the size of the item in bytes. The ID can then be used to download the actual files. To download all the text files associated with the IDs listed in a file `./itemlist.txt`, you can use this command (a sketch for producing `./itemlist.txt` from the indices follows the block):
|
|
|
|
|
``` |
|
|
wget \ |
|
|
--recursive \ |
|
|
--span-hosts \ |
|
|
--no-clobber \ |
|
|
--no-parent \ |
|
|
--no-host-directories \ |
|
|
--cut-dirs=1 \ |
|
|
--accept=txt \ |
|
|
--execute robots=off \ |
|
|
--level=1 \ |
|
|
--input-file=./itemlist.txt \ |
|
|
--base='http://archive.org/download/' |
|
|
``` |
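
To build `./itemlist.txt` from the downloaded indices, something like the sketch below should work. The exact on-disk format of the indices is whatever `src/download_archive_dot_org.py` writes; here it is assumed to be one JSON object per line with `identifier` and `date` fields, so adjust the parsing to match the real format:

```
import json
from pathlib import Path

# Write one identifier per line; the wget command above resolves them
# against --base='http://archive.org/download/'.
with open("itemlist.txt", "w") as out:
    for index_file in sorted(Path("data/archive-dot-org/indices").iterdir()):
        if not index_file.is_file():
            continue
        for line in index_file.read_text().splitlines():
            if not line.strip():
                continue
            item = json.loads(line)  # assumed format: one JSON object per line
            identifier = item.get("identifier")
            year = str(item.get("date", ""))[:4]
            if identifier and year.isdigit() and int(year) < 1950:
                out.write(identifier + "\n")
```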
|
|
|
|
|
- Dataset query (1800-1950): https://archive.org/search?query=date%3A%5B1800-01-01%20TO%201949-12-31%5D |
|
|
- Advanced search: https://archive.org/advancedsearch.php |
|
|
- Query: `mediatype:(texts) AND language:(English) AND date:[1800-01-01 TO 1949-12-31]` |
|
|
|
|
|
Better query: |
|
|
|
|
|
- `mediatype:(texts) AND (language:eng OR language:"English") AND date:[1800-01-01 TO 1949-12-31]` |
|
|
- fields of interest: |
|
|
- creator |
|
|
- date |
|
|
- downloads |
|
|
- identifier |
|
|
- item_size |
|
|
- subject |
|
|
- title |
|
|
|
|
|
## US Patent and Trademark Office (USPTO)
|
|
|
|
|
(requires an API key) |
|
|
|
|
|
https://data.uspto.gov/apis/getting-started |
|
|
|
|
|
## HathiTrust
|
|
|
|
|
> HathiTrust was founded in 2008 as a not-for-profit collaborative of academic |
|
|
> and research libraries now preserving 18+ million digitized items in the |
|
|
> HathiTrust Digital Library. We offer reading access to the fullest extent |
|
|
> allowable by U.S. and international copyright law, text and data mining tools |
|
|
> for the entire corpus, and other emerging services based on the combined |
|
|
> collection. |
|
|
|
|
|
Looks like it has a lot of information, although this might all just be duplicated from the data available in the Internet Archive. It's also less easy to download, and the Google Books-digitised items come with some pretty restrictive licensing.
|
|
|
|
|
Example plain-text page: https://babel.hathitrust.org/cgi/pt?id=mdp.39015082239875&seq=26&format=plaintext
|
|
|
|
|
[Advanced search URL](https://babel.hathitrust.org/cgi/ls?lmt=ft&a=srchls&adv=1&c=148631352&q1=*&field1=ocr&anyall1=all&op1=AND&yop=before&pdate_end=1949&facet_lang=language008_full%3AEnglish&facet_lang=language008_full%3AEnglish%2C+Middle+%281100-1500%29&facet_lang=language008_full%3AEnglish%2C+Old+%28ca.+450-1100%29&facet_format=format%3ADictionaries&facet_format=format%3AEncyclopedias&facet_format=format%3AJournal&facet_format=format%3AManuscript&facet_format=format%3ANewspaper&facet_format=format%3ABiography&facet_format=format%3ABook) |
|
|
|
|
|
# Cleaning up |
|
|
|
|
|
We'll need to remove all references to Google, the internet, URLs, OCR, and similar digitisation-era artefacts.
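
A sketch of one way to flag candidate files for review; the term list here is illustrative, not exhaustive, and will produce false positives (e.g. ordinary uses of "scanned"):

```
import re
from pathlib import Path

# Illustrative list of post-1950 / digitisation-era markers; extend as needed.
ANACHRONISMS = re.compile(
    r"https?://|www\.|\b(google|internet|website|web ?site|e-?mail|online|OCR|"
    r"scanned|digitized|digitised)\b",
    re.IGNORECASE,
)

for path in Path("data").rglob("*.txt"):
    text = path.read_text(encoding="utf-8", errors="replace")
    hit = ANACHRONISMS.search(text)
    if hit:
        print(f"{path}\t{hit.group(0)!r}")
```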
|
|
|
|
|
List all files where at least 1% of lines contain non-Latin-script characters (there are a lot of Hebrew newspapers):
|
|
|
|
|
``` |
|
|
PAT='[\p{Hebrew}\p{Cyrillic}\p{Greek}\p{Arabic}\p{Hangul}\p{Hiragana}\p{Katakana}]' |
|
|
join -t: -j1 \ |
|
|
<(rg -u -g '*.txt' -P -c "$PAT" | sort -t: -k1,1) \ |
|
|
<(rg -u -g '*.txt' -c '^' | sort -t: -k1,1) \ |
|
|
| awk -F: '{ if ($2 > 0.01*$3) print $1 }' > non-english.txt |
|
|
``` |
|
|
|
|
|
You'll probably want to check these files manually, because OCR does funky stuff and imagines non-English characters, and old English texts often quote Latin/Greek/Hebrew/French/etc.
|
|
|
|
|
Also remove files where more than 10% of lines contain German characters (umlauts or ß):
|
|
|
|
|
``` |
|
|
PAT='[äöüÄÖÜß]' |
|
|
join -t: -j1 \ |
|
|
<(rg -u -g '*.txt' -P -c "$PAT" data/ | sort -t: -k1,1) \ |
|
|
<(rg -u -g '*.txt' -c '^' data/ | sort -t: -k1,1) \ |
|
|
| awk -F: '{ if ($2 > 0.1*$3) print $1 }' > german.txt |
|
|
``` |
|
|
|
|
|
And again for Danish/Norwegian/Swedish characters, this time with a 50% threshold:
|
|
|
|
|
``` |
|
|
PAT='[æøåÆØÅ]'
|
|
join -t: -j1 \ |
|
|
<(rg -u -g '*.txt' -P -c "$PAT" data/ | sort -t: -k1,1) \ |
|
|
<(rg -u -g '*.txt' -c '^' data/ | sort -t: -k1,1) \ |
|
|
| awk -F: '{ if ($2 > 0.5*$3) print $1 }' > neurope.txt |
|
|
``` |
|
|
|
|
|
Mega regex to find all dates from 1950 onwards:
|
|
|
|
|
``` |
|
|
rg -n -i -P '\b((jan(uary)?|feb(ruary)?|mar(ch)?|apr(il)?|may|jun(e)?|jul(y)?|aug(ust)?|sep(t(ember)?)?|oct(ober)?|nov(ember)?|dec(ember)?)\s+\d{1,2}(st|nd|rd|th)?\s*,?\s*(1950|19[5-9]\d|20(0\d|1\d|2[0-5]))|(jan(uary)?|feb(ruary)?|mar(ch)?|apr(il)?|may|jun(e)?|jul(y)?|aug(ust)?|sep(t(ember)?)?|oct(ober)?|nov(ember)?|dec(ember)?)\s*,?\s*(1950|19[5-9]\d|20(0\d|1\d|2[0-5]))|\d{1,2}(st|nd|rd|th)?\s+(jan(uary)?|feb(ruary)?|mar(ch)?|apr(il)?|may|jun(e)?|jul(y)?|aug(ust)?|sep(t(ember)?)?|oct(ober)?|nov(ember)?|dec(ember)?)\s*,?\s*(1950|19[5-9]\d|20(0\d|1\d|2[0-5]))|(1950|19[5-9]\d|20(0\d|1\d|2[0-5]))[-/](0?[1-9]|1[0-2])[-/](0?[1-9]|[12]\d|3[01])|(0?[1-9]|[12]\d|3[01])[-/](0?[1-9]|1[0-2])[-/](1950|19[5-9]\d|20(0\d|1\d|2[0-5])))\b' data/ |
|
|
``` |
|
|
|
|
|
And to delete the matching files:
|
|
|
|
|
``` |
|
|
rg --null -n -i -P '\b((jan(uary)?|feb(ruary)?|mar(ch)?|apr(il)?|may|jun(e)?|jul(y)?|aug(ust)?|sep(t(ember)?)?|oct(ober)?|nov(ember)?|dec(ember)?)\s+\d{1,2}(st|nd|rd|th)?\s*,?\s*(1950|19[5-9]\d|20(0\d|1\d|2[0-5]))|(jan(uary)?|feb(ruary)?|mar(ch)?|apr(il)?|may|jun(e)?|jul(y)?|aug(ust)?|sep(t(ember)?)?|oct(ober)?|nov(ember)?|dec(ember)?)\s*,?\s*(1950|19[5-9]\d|20(0\d|1\d|2[0-5]))|\d{1,2}(st|nd|rd|th)?\s+(jan(uary)?|feb(ruary)?|mar(ch)?|apr(il)?|may|jun(e)?|jul(y)?|aug(ust)?|sep(t(ember)?)?|oct(ober)?|nov(ember)?|dec(ember)?)\s*,?\s*(1950|19[5-9]\d|20(0\d|1\d|2[0-5]))|(1950|19[5-9]\d|20(0\d|1\d|2[0-5]))[-/](0?[1-9]|1[0-2])[-/](0?[1-9]|[12]\d|3[01])|(0?[1-9]|[12]\d|3[01])[-/](0?[1-9]|1[0-2])[-/](1950|19[5-9]\d|20(0\d|1\d|2[0-5])))\b' data/ -l | xargs -0 rm -v |
|
|
``` |
|
|
|
|
|
Removing empty directories: |
|
|
|
|
|
``` |
|
|
find data -mindepth 1 -depth -type d -empty -print -delete |
|
|
``` |
|
|
|
|
|
Find suspect non-English characters (lots of false positives due to OCR, though):
|
|
|
|
|
``` |
|
|
rg -m 10 -P '[àáâãäåçèéêëìíîïñòóôõöùúûüÿÀÁÂÃÄÅÇÈÉÊËÌÍÎÏÑÒÓÔÕÖÙÚÛÜŸßæøåÆØÅčďěňřšťžČĎĚŇŘŠŤŽąćęłńśźżĄĆĘŁŃŚŹŻășțâîĂȘȚÂÎğşİıĞŞ]' data/ |
|
|
``` |
|
|
|
|
|
We'll also want to remove the euro sign (€), since it didn't exist before 1950.
|
|
|
|
|
Maybe just run a spell-checker over the whole thing and give every file a quality score? Or score each file by the fraction of its words that appear in a dictionary of known English words?
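
A minimal sketch of the dictionary idea, assuming a system word list at `/usr/share/dict/words` (archaic spellings and proper nouns will drag scores down, so treat this as a ranking signal rather than a hard filter):

```
import re
from pathlib import Path

# Any plain one-word-per-line English word list will do here.
WORDS = {w.strip().lower() for w in open("/usr/share/dict/words", encoding="utf-8", errors="ignore")}
TOKEN_RE = re.compile(r"[A-Za-z]+")

def quality_score(path):
    """Fraction of alphabetic tokens found in the dictionary (0.0 to 1.0)."""
    tokens = TOKEN_RE.findall(path.read_text(encoding="utf-8", errors="replace").lower())
    if not tokens:
        return 0.0
    return sum(t in WORDS for t in tokens) / len(tokens)

# Worst files first, to prioritise manual review or deletion.
scored = sorted((quality_score(p), p) for p in Path("data").rglob("*.txt"))
for score, path in scored[:20]:
    print(f"{score:.2f}  {path}")
```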
|
|
|
|
|
## Estimated training costs |
|
|
|
|
|
NanoGPT: |
|
|
|
|
|
- 10B FineWeb tokens -> roughly GPT-2-level accuracy
- 40B FineWeb tokens -> roughly GPT-3-level accuracy
|
|
|
|
|
[TinyLlama 1.1B](https://github.com/jzhang38/TinyLlama): |
|
|
|
|
|
- 3T tokens (data set size: 950B) |
|
|
- just 90 days using 16 A100-40G GPUs |
|
|
- Note that they train on more tokens than Meta's Llama models did
|
|
|
|
|
SmolLM3: |
|
|
|
|
|
- Hugging Face |
|
|
- 3B |
|
|
- 11T tokens pretraining |
|
|
- 220k GPU hours: 48 nodes, 8xH100 GPUs, 24 days |
|
|
- https://huggingface.co/blog/smollm3 |
|
|
|
|
|
Lambda AI pricing: |
|
|
|
|
|
- $2.69/H100/hour |
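
Rough arithmetic on those numbers: SmolLM3's ~220k GPU-hours at $2.69/H100/hour comes to roughly $590k, and TinyLlama's 16 GPUs × 90 days ≈ 34,560 GPU-hours would be roughly $93k at the same hourly rate (they actually used cheaper A100-40G cards). Hence the plan to start with a 125M model and only scale to 3B/7B with funding.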
|
|
|
|
|
## Better cleaning |
|
|
|
|
|
Okay, it looks like I'll have to do some proper cleaning myself.
|
|
|
|
|
Ideas: |
|
|
|
|
|
- Use regex to un-hard-wrap lines (see the sketch after this list)
- Use basic regex to find non-English languages and characters
- Use a dictionary to help correct OCR'd words
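
A sketch of the un-hard-wrapping idea, assuming paragraphs are separated by blank lines and that a word split across lines ends with a hyphen:

```
import re

def unwrap(text):
    """Un-hard-wrap OCR text; paragraphs stay separated by blank lines."""
    # Re-join words hyphenated across a line break: "per-\nson" -> "person".
    # (This also merges genuine compounds split at a line end; a dictionary
    # lookup could be used to decide which hyphens to keep.)
    text = re.sub(r"(\w)-\n(\w)", r"\1\2", text)
    # Replace single newlines inside a paragraph with spaces; keep blank lines.
    text = re.sub(r"(?<!\n)\n(?!\n)", " ", text)
    return text

print(unwrap("Until a short time\nago, scarcely one per-\nson in a thousand"))
# -> "Until a short time ago, scarcely one person in a thousand"
```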
|
|
|
|
|
### Cleaning test cases |
|
|
|
|
|
Some test cases for cleaning data: |
|
|
|
|
|
``` |
|
|
==> ./ct_jackson_ver01/sn92051126/1911/10/05/ed-1/seq-3/ocr.txt <== |
|
|
ipv\ ■ .... |
|
|
:5 Until a short time |
|
|
ago, scarcely one |
|
|
person in a thousand |
|
|
had ever tasted a |
|
|
§;■■ really good soda |
|
|
cracker—as it came |
|
|
'• fresh and crisp from |
|
|
the oven. |
|
|
Now every man, |
|
|
``` |
|
|
|
|
|
``` |
|
|
==> ./ct_jackson_ver01/sn92051126/1911/10/05/ed-1/seq-4/ocr.txt <== |
|
|
New HavenUnion |
|
|
i; >T/ ' - .. ' |
|
|
Iqdnusu bv aluxandeb troop. |
|
|
• '4*-----— |
|
|
^.THURSDAY, OCTOBER 5, 1911. |
|
|
r V Notice to Advertisers. |
|
|
vChunge of advertisements must be |
|
|
Ityby 10 o’clock In tbe morning, to ln |
|
|
iim0 the change being nruuie tbe same |
|
|
TTt cannot be guaranteed that |
|
|
``` |
|
|
|