Any plans to open source AI Sheets?

#3
by deshetti - opened

Congratulations on the great product! I wanted to check if you guys plan to open source the solution behind AI Sheets, so that it could be integrated into other data products.

Hugging Face Sheets org

Hi @deshetti , thanks!

We'd love to know more about the uses you have in mind.

Currently, it's possible to deploy the app locally or on your own server using a Docker image. Would this be useful for you?
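For anyone looking for a starting point, a minimal local run might look like this (a sketch only: the aisheets/sheets image name matches the dev tag mentioned later in this thread, and the port mapping is an assumption — check the project's documentation for the real options):

```bash
# Sketch: image name and port mapping are assumptions;
# see the project's docs for the actual run instructions.
docker pull aisheets/sheets
docker run -it -p 3000:3000 aisheets/sheets
```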

@dvilasuero Thanks for the response. I tried running it locally using the Dockerfile, but nothing really works apart from the landing page. Some documentation for running it using Docker would be helpful.

We are currently working on a data product that we plan to open source soon. The app lets users analyze datasets using AI. We currently use a simple grid to manage the datasets, and I was thinking the Sheets UI you've built would be a great way to manage them with AI features.

Hugging Face Sheets org

Hi @deshetti, thanks for your messages.

We've put together a step-by-step guide on how to run Sheets locally. Please check the Running Sheets locally article and let us know if you have any questions or need further assistance.

Do you also plan to make it possible in the future to run it with a locally deployed LLM?

Hugging Face Sheets org

Thanks for your question, @chschinner

We're using the inference package to call the inference endpoints. The client supports calling local endpoints (see the docs), so it would be easy to allow this kind of setup.

Thanks a lot, @frascuchon

I wasn't aware of this and will give it a try!

Did you manage to do it, @chschinner? Happy to help otherwise.

Thanks for offering help, @julien-c. I've successfully set up AI Sheets locally with Docker, which was very straightforward. However, I'm facing a challenge getting it to use my local Ollama LLM (on port 11434) instead of relying on Hugging Face's cloud inference servers. My initial approach was to configure a local endpoint URL through an environment variable (like HF_INFERENCE_ENDPOINT or INFERENCE_API_URL), but this isn't working. Could you provide guidance on how to point AI Sheets at a local LLM endpoint like Ollama in this setup?

Hugging Face Sheets org

Hi @chschinner, that's exactly what you need to do, but I realized there's no documentation of the environment variable name for using a custom endpoint. To set up Sheets with a local endpoint, you should use the MODEL_ENDPOINT_URL environment variable.

Since you're running Sheets in Docker, you need to configure it to reach a service running outside the container. For that, you can use host.docker.internal to reach the host.

If your Ollama LLM is running on http://localhost:11434, define the endpoint URL as MODEL_ENDPOINT_URL=http://host.docker.internal:11434 when running the Docker container.
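For example, a full invocation might look like this (a sketch only: the port mapping is an assumption, and on Linux the --add-host flag is needed for host.docker.internal to resolve):

```bash
# Sketch: image name and app port are assumptions; adjust to your setup.
# On Linux, --add-host=host.docker.internal:host-gateway makes the
# host.docker.internal hostname resolve to the Docker host.
docker run -it \
  -p 3000:3000 \
  --add-host=host.docker.internal:host-gateway \
  -e MODEL_ENDPOINT_URL=http://host.docker.internal:11434 \
  aisheets/sheets
```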

Let me know if this works for you. We're happy to help if you struggle with it.

Thanks a lot @frascuchon! AI Sheets is now successfully sending requests to my local Ollama LLM. However, I'm seeing 400 Bad Request errors in the Ollama console. I suspect this might be caused by a missing model specification in the requests coming from AI Sheets. I have gemma:7b installed in Ollama. Is there another environment variable in AI Sheets to specify which model it should ask Ollama to use (e.g., gemma:7b)?
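One way to test that hypothesis directly (assuming Ollama's OpenAI-compatible /v1/chat/completions route, which the log further down also shows) is to replay the request by hand with and without a model field:

```bash
# Without "model": Ollama's OpenAI-compatible endpoint should answer 400.
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'

# With "model" specified, the same request should succeed.
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gemma:7b", "messages": [{"role": "user", "content": "Hello"}]}'
```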

Hugging Face Sheets org

Hi @chschinner. Can you share your Ollama log? I can't run Ollama locally right now, but I'll try to figure out what's going on.

Hugging Face Sheets org

I found this issue, which may be related to what's happening: https://github.com/huggingface/huggingface.js/issues/452

Hugging Face Sheets org

@chschinner I fixed this by upgrading some packages. We plan to release some improvements very soon. Meanwhile, you can use the dev image tag:

```bash
docker pull aisheets/sheets:dev
```

Appreciate your help, @frascuchon. I tried the dev image, but I still get the 400 Bad Request errors. Below is the output I get for every request (I tried to get more detailed logs, but unfortunately this is all I have for now):

```
[GIN] 2025/07/10 - 08:27:21 | 400 | 1.982541ms | 127.0.0.1 | POST "/v1/chat/completions"
```

Hugging Face Sheets org

Hi @chschinner. I was finally able to run an Ollama LLM locally, and I found the problem: the model attribute wasn't being sent when using custom endpoints.

To configure this properly, you should define another env var, MODEL_ENDPOINT_NAME, to provide the model name for the custom endpoint. In your case, set MODEL_ENDPOINT_NAME=gemma:7b.

Please update the Docker image (docker pull ...) and let me know if this finally works.
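Putting the pieces together, the full setup might look like this (same assumptions as the earlier sketch regarding the port mapping and the Linux-only --add-host flag):

```bash
docker pull aisheets/sheets:dev

# Sketch: point Sheets at the local Ollama endpoint and tell it which
# model to request; the port mapping and --add-host flag are assumptions.
docker run -it \
  -p 3000:3000 \
  --add-host=host.docker.internal:host-gateway \
  -e MODEL_ENDPOINT_URL=http://host.docker.internal:11434 \
  -e MODEL_ENDPOINT_NAME=gemma:7b \
  aisheets/sheets:dev
```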
