AI & ML interests

Connecting individuals with innovation: Emancipating and Truly Federalizing Private Intelligence

Recent Activity

fblgit 
posted an update 3 days ago
Introducing HarEmb-PII: a single-transformer-block layer distilled from the OpenMed PII privacy filter.

It's a very tiny model that reaches comparable results on PII classification through Viterbi BIOES decoding, retaining ~98% of the original model's performance while being a tiny fraction of its size.
It doubles throughput (tokens/sec) and dramatically reduces both the active parameter count and the VRAM footprint.
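Constrained Viterbi decoding over BIOES tags can be sketched generically. The snippet below is a minimal, hand-rolled illustration with a hypothetical single-entity tag set and a legality mask; it is not HarEmb's actual label set or decoder:

```python
import numpy as np

# Hypothetical BIOES tag set for a single PII entity type.
TAGS = ["O", "B", "I", "E", "S"]

# Transitions that are legal under the BIOES scheme: an entity starts
# with B (or is a single-token S), continues with I, and ends with E.
ALLOWED = {
    "O": {"O", "B", "S"},
    "B": {"I", "E"},
    "I": {"I", "E"},
    "E": {"O", "B", "S"},
    "S": {"O", "B", "S"},
}

def viterbi_bioes(emissions: np.ndarray) -> list[str]:
    """Decode the highest-scoring legal BIOES tag sequence.

    emissions: (seq_len, num_tags) scores from the classifier head.
    """
    n, k = emissions.shape
    neg_inf = -1e9
    # trans[i, j] = 0 if TAGS[i] -> TAGS[j] is legal, else a large penalty.
    trans = np.full((k, k), neg_inf)
    for i, a in enumerate(TAGS):
        for j, b in enumerate(TAGS):
            if b in ALLOWED[a]:
                trans[i, j] = 0.0

    # Virtual start state behaves like O, so sequences cannot open with I/E.
    score = trans[TAGS.index("O")] + emissions[0]
    backptr = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        # Score of the best path ending in each tag at step t.
        cand = score[:, None] + trans + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)

    # Sequences may not end mid-entity (on B or I).
    final_ok = np.array([0.0 if t in {"O", "E", "S"} else neg_inf for t in TAGS])
    best = int((score + final_ok).argmax())
    path = [best]
    for t in range(n - 1, 0, -1):
        best = int(backptr[t, best])
        path.append(best)
    return [TAGS[i] for i in reversed(path)]
```

The legality mask is what distinguishes this from plain argmax decoding: even if the classifier scores an illegal tag highest at some position, the decoded sequence is always a well-formed BIOES labeling.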

The evaluation and benchmarking are in the model repository and can be reproduced. I trained it on an RTX 4090 without issues; it is compatible with the OpenMed suite and works as an in-place replacement for the openai privacy-filter model.

fblgit/haremb-privacy-filter-opennemo

I'm looking for people who want to co-author, contribute to, or endorse the HarEmb research and the model's technical paper.

Contact xavi@juanako.ai
Sri-Vigneshwar-DJ 
posted an update 8 days ago

We ran Feather DB v0.8.0 on LongMemEval (ICLR 2025) — 500 questions across real multi-session conversations, up to 115K tokens each.

**Score: 0.693** · GPT-4o full-context baseline: 0.640
Full 500-question run with Gemini-Flash: **$2.40**

Per-axis breakdown:
→ Info-extraction: **0.942**
→ Knowledge-update: **0.714**
→ Multi-session: **0.606**
→ Temporal: **0.477** ← the hard one, Phase 9 addresses this

Architecture: Hybrid BM25+dense · adaptive temporal decay · embedded (no server) · p50 = 0.19ms · MIT

pip install feather-db
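The hybrid BM25+dense architecture with temporal decay can be sketched generically. Everything below is a toy illustration (hand-rolled BM25, token-overlap Jaccard standing in for dense embedding similarity, exponential half-life decay); it is not Feather DB's actual API or scoring:

```python
import math
from collections import Counter

# Toy corpus of memory snippets with ages in days (hypothetical data).
docs = [
    {"id": 0, "text": "user moved to berlin last spring", "age_days": 120},
    {"id": 1, "text": "user moved to munich yesterday", "age_days": 1},
    {"id": 2, "text": "user likes espresso in the morning", "age_days": 30},
]

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Plain Okapi BM25 over whitespace tokens (sparse lexical signal)."""
    tokenized = [d["text"].split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = 0.0
        for term in query.split():
            df = sum(1 for t in tokenized if term in t)
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            f = tf[term]
            s += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(s)
    return scores

def dense_scores(query, docs):
    """Stand-in for embedding cosine similarity: token-set Jaccard overlap.
    A real system would score query/memory embeddings here."""
    q = set(query.split())
    return [len(q & set(d["text"].split())) / len(q | set(d["text"].split()))
            for d in docs]

def hybrid_rank(query, docs, alpha=0.5, half_life_days=30.0):
    """Blend normalized sparse and dense scores, then apply temporal decay."""
    sparse, dense = bm25_scores(query, docs), dense_scores(query, docs)

    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [0.0 if hi == lo else (x - lo) / (hi - lo) for x in xs]

    sparse, dense = norm(sparse), norm(dense)
    ranked = []
    for d, s, e in zip(docs, sparse, dense):
        decay = 0.5 ** (d["age_days"] / half_life_days)  # newer memories win
        ranked.append((decay * (alpha * s + (1 - alpha) * e), d["id"]))
    return sorted(ranked, reverse=True)
```

With the decay term, the fresh "munich" memory outranks the stale "berlin" one on the query "user moved" even though both match lexically; that kind of recency preference is what a knowledge-update axis rewards.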

Raw results + audit JSONs: Hawky-ai/longmemeval-results
Tonic 
posted an update 12 days ago
🙋🏻‍♂️ Hey there folks,

since everyone liked my previous announcement post ( https://huggingface.co/posts/Tonic/338509028435394 ) so much, I'm back with more high-quality procedural datasets in the geospatial domain for SFT training!

Check this one out:
NuTonic/sat-bbox-metadata-sft-v1

the goal is to be able to train vision models on multiple images for remote sensing analysis in one shot.

hope you like it! 🚀
Tonic 
posted an update 17 days ago
🙋🏻‍♂️ Hey there folks,

I'm sharing Hugging Face's largest dataset of annotated satellite images today.

Check it out here: NuTonic/sat-image-boundingbox-sft-full

I hope you like it! The idea is to be able to use this with small vision models 🚀
darkc0de 
posted an update 22 days ago
For the 1 year anniversary of the public release of darkc0de/XortronCriminalComputingConfig I present "XortronOS"

Something I've been tinkering with on and off for a while. It's a semi-functional desktop environment in your browser. You can chat with Xortron, view Xortron's personal bookmarks, and view the Xortron Model Spec.

Still very much a work-in-progress, just a fun toy I thought I'd share...

Open to ideas for improvement

You can visit directly, quickly, and full screen at www.xortron.tech
Or via HF at darkc0de/XortronOS

fblgit 
posted an update about 1 month ago
I recently built https://github.com/fblgit/eLLMulator
A software emulator for Claude Code.

eLLMulator approach:

LLM agents become your software components. Each agent deeply studies its assigned source file, then interacts with other agents via synchronous MCP tool calls that mirror real function calls. The call graph emerges naturally from code control flow, producing traces that capture not just what happened, but why each component behaved as it did.

The Claude Agent SDK provides sessions, MCP provides the bus. The code itself is the routing layer.
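The routing idea can be illustrated with a toy synchronous bus. The `Bus` and `Agent` classes below are hypothetical stand-ins written for illustration only; the real project uses the Claude Agent SDK for sessions and MCP as the bus:

```python
from dataclasses import dataclass, field

@dataclass
class Bus:
    """Synchronous call bus: a stand-in for MCP tool calls between agents."""
    agents: dict = field(default_factory=dict)
    trace: list = field(default_factory=list)

    def register(self, name, agent):
        self.agents[name] = agent

    def call(self, caller, callee, fn, *args):
        # Record who called whom; the call graph emerges from these entries.
        self.trace.append((caller, f"{callee}.{fn}", args))
        return self.agents[callee].handle(fn, *args)

@dataclass
class Agent:
    """One agent per source file; `handle` stands in for the LLM session
    that studied the file and now answers calls against it."""
    name: str
    bus: "Bus"
    impl: dict = field(default_factory=dict)

    def handle(self, fn, *args):
        return self.impl[fn](*args)

bus = Bus()
# Two hypothetical components: `counter` delegates to `parser` over the bus,
# mirroring a real function call in the original code's control flow.
parser = Agent("parser", bus, {"parse": lambda s: s.split()})
counter = Agent("counter", bus,
                {"count": lambda s: len(bus.call("counter", "parser", "parse", s))})
bus.register("parser", parser)
bus.register("counter", counter)

result = bus.call("main", "counter", "count", "three word phrase")
```

After the call, `bus.trace` holds both hops (`main → counter.count`, then `counter → parser.parse`), which is the kind of why-did-this-happen trace the post describes.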

https://github.com/fblgit/eLLMulator
darkc0de 
posted an update 3 months ago
1440GB of VRAM is incredibly satisfying 😁
mitkox 
posted an update 3 months ago
My USB charger has a Blackwell GPU and 128GB RAM.
What. A. Time. To. Be. Alive.
People in Sofia: “It’s freezing.”
Me: sitting next to 3kW of space AI heaters on my desk 👀
1x GLM-5, 2x MiniMax-M2.5, 1x Qwen3 Coder Next; all on single Aibrix/K8s cluster
Tonic 
posted an update 3 months ago
🤔 Who would win ?

- a fully subsidized ai lab
OR
- 3 random students named
kurakurai
?

demo : Tonic/fr-on-device

if you like it, give the demo a little star and send a shoutout to @MaxLSB @jddqd and @GAD-cell for absolutely obliterating the Pareto frontier of French language understanding.
mitkox 
posted an update 3 months ago
134,614 tok/sec max input prefill
1,031 tok/sec max output generation

At these local AI speeds, there is no User Interface for humans. My human UI is the Radicle distributed Git issues queue

On my GPU workstation:
- Z8 Fury G5 4x A6000
- MiniMax-M2.5
- Claude Code to localhost:8000