Were the original PDFs saved?

#2
by staghado - opened

quickly looking at the dataset it doesn't look like it contains the original PDF files, only URLs!

FineData org
•
edited Sep 7

It contains the text extracted from the PDFs; the actual PDFs would take an extremely large amount of storage. For the non-truncated PDFs, you can fetch them from the Common Crawl index using the offsets if you'd like.
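
To make that concrete, here is a minimal sketch of such a ranged fetch, assuming each row exposes the WARC path and byte offsets (the field names below are the ones used in the helper shared further down this thread; verify them against your subset's schema):

import requests

def fetch_warc_slice(row):
    # Field names assumed from the helper later in this thread.
    url = "https://data.commoncrawl.org/" + row["cc_warc_file_name"]
    start, end = int(row["cc_warc_start"]), int(row["cc_warc_end"])
    # The stored end offset is exclusive, while HTTP Range end bytes are inclusive.
    resp = requests.get(url, headers={"Range": f"bytes={start}-{end - 1}"}, timeout=60)
    resp.raise_for_status()
    # Returns the (typically gzip-compressed) WARC record: HTTP headers + PDF body.
    return resp.content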

Most URLs failed to download.

FineData org

You should fetch them from Common Crawl (for the non-truncated ones) and not from the URL directly; many are no longer online.

Does anyone have a code snippet to download a specific subset?
Thanks in advance.

Yes @HaithemH, using a streaming dataset:

dataset = load_dataset("HuggingFaceFW/finepdfs", name=subset_name, split=split_name, streaming=True)

and then iterate over it

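For example, streaming a few rows from one subset (the config name here is just a placeholder; check the dataset card for the available ones):

from datasets import load_dataset

# "eng_Latn" is a placeholder config name; list the real ones via
# datasets.get_dataset_config_names("HuggingFaceFW/finepdfs") or the dataset card.
dataset = load_dataset("HuggingFaceFW/finepdfs", name="eng_Latn", split="train", streaming=True)

for i, row in enumerate(dataset):
    print(row.keys())   # inspect the available columns
    if i >= 2:          # only peek at the first few rows
        break
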
I mean the PDFs; I was not able to locate the PDFs corresponding to the text in Common Crawl.

I am also looking into downloading them from CC.
What issues are you getting, btw?

The problem is I'm not able to get the right PDFs corresponding to the text within the Common Crawl parquet.
Could you share some code, if possible?

Here's the function to obtain PDF bytes given Common Crawl byte offsets:

import requests
import io
from warcio.archiveiterator import ArchiveIterator
from typing import Dict, Any, Optional
import logging

logger = logging.getLogger(__name__)


CC_BASE_URL = "https://data.commoncrawl.org/"

def get_pdf_from_cc_warc(metadata: Dict[str, Any]) -> Optional[bytes]:
    """
    Fetches PDF raw bytes from a Common Crawl WARC file using metadata.

    Args:
        metadata: A dictionary containing metadata for the PDF file, including:
            - cc_warc_file_name (str): The path to the WARC file on Common Crawl.
            - cc_warc_start (int): The starting byte offset of the WARC record.
            - cc_warc_end (int): The ending byte offset of the WARC record.
                                 (Exclusive, points to the byte *after* the record).

    Returns:
        Raw bytes of the PDF content if successful, otherwise None.
    """
    warc_file_path = metadata.get("cc_warc_file_name")
    start_offset = metadata.get("cc_warc_start")
    end_offset = metadata.get("cc_warc_end")

    # --- Input Validation ---
    if not warc_file_path or start_offset is None or end_offset is None:
        logger.info("Error: Missing required metadata fields (cc_warc_file_name, cc_warc_start, cc_warc_end).")
        return None

    try:
        start_offset = int(start_offset)
        end_offset = int(end_offset) # The end offset is exclusive
        if start_offset < 0 or end_offset <= start_offset:
             logger.info(f"Error: Invalid offsets: start={start_offset}, end={end_offset}")
             return None
    except (ValueError, TypeError) as e:
        logger.info(f"Error: Invalid offset type or value: {e}")
        return None

    # Calculate the range for the HTTP header (inclusive end byte)
    # Example: start=100, end=200 -> fetch bytes 100 through 199
    range_header_value = f"bytes={start_offset}-{end_offset - 1}"
    headers = {"Range": range_header_value}
    url = f"{CC_BASE_URL}{warc_file_path}"

    logger.info(f"Fetching range {range_header_value} from {url}")

    try:
        # --- Fetch the WARC Record Slice ---
        response = requests.get(url, headers=headers, stream=True, timeout=60) # Added stream=True and timeout
        response.raise_for_status() # Raise HTTPError for bad responses (4xx or 5xx)

        # Check if the server respected the Range request
        # Common Crawl usually returns 206 Partial Content
        if response.status_code != 206:
            logger.info(f"Warning: Expected status code 206 Partial Content, but got {response.status_code}. Proceeding anyway.")
            # You might want stricter handling here depending on expected server behavior

        # --- Parse the WARC record ---
        # We wrap the received content bytes in a BytesIO stream so warcio can read it
        record_bytes = response.content # Read the entire ranged response
        if not record_bytes:
             logger.info("Error: Received empty response content.")
             return None

        record_stream = io.BytesIO(record_bytes)

        # Use warcio to iterate over records in the stream (should only be one)
        for record in ArchiveIterator(record_stream):
            # We expect a 'response' type record for web captures
            if record.rec_type == 'response':
                # The payload of a WARC response record includes HTTP status/headers
                # followed by the actual content (the PDF).
                # warcio helps separate these. record.content_stream() gives the payload *after* HTTP headers.
                pdf_content = record.content_stream().read()

                # Optional: Basic check if it looks like a PDF
                if pdf_content.startswith(b'%PDF-'):
                    logger.info(f"Successfully extracted {len(pdf_content)} bytes of PDF data.")
                    return pdf_content
                else:
                    logger.info("Warning: Extracted content doesn't start with PDF magic bytes '%PDF-'.")
                    # Return it anyway, maybe it's valid but unusual
                    return pdf_content
            else:
                 logger.info(f"Warning: Found WARC record of type '{record.rec_type}', expected 'response'. Skipping.")

        # If loop finishes without returning, no suitable record was found
        logger.info("Error: No suitable WARC response record found in the fetched byte range.")
        return None

    except requests.exceptions.RequestException as e:
        logger.info(f"Error during HTTP request to {url}: {e}")
        return None
    except Exception as e:
        # Catch potential warcio parsing errors or other unexpected issues
        logger.info(f"An unexpected error occurred: {e}")
        return None
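
A quick usage sketch, pairing it with the streaming loader from earlier in the thread (the config name is a placeholder, and the offset columns are assumed to match the metadata keys the function expects):

from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/finepdfs", name="eng_Latn", split="train", streaming=True)

for row in ds:
    pdf_bytes = get_pdf_from_cc_warc(row)  # needs cc_warc_file_name / cc_warc_start / cc_warc_end
    if pdf_bytes:
        with open("sample.pdf", "wb") as f:
            f.write(pdf_bytes)
        break  # stop after the first PDF that fetches successfully
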
FineData org

Awesome work! We also provide a way to access the PDFs in our codebase: https://github.com/huggingface/finepdfs

hynky changed discussion status to closed
