indeed 🤗
appvoid
The first project, as far as I know, that focuses purely on few-shot prompting results rather than zero-shot, as is usually done with decoder-only transformer models. This model excels at few-shot tasks compared to most 0.6b and even larger models. It also outperforms the base model on some popular language modeling benchmarks.
appvoid/arco-3
Try it yourself!
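Few-shot prompting means packing a handful of worked examples into the context before the real query, so the model completes the pattern instead of answering cold. A minimal sketch of prompt construction (the Q/A template and example questions are illustrative assumptions, not a required format for arco-3):

```python
# Build a few-shot prompt by concatenating labeled examples, then the
# unanswered query. The Q:/A: template here is just one common choice.
def build_few_shot_prompt(examples, query):
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

examples = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Japan?", "Tokyo"),
]
prompt = build_few_shot_prompt(examples, "What is the capital of Italy?")
print(prompt)
```

The resulting string would then be fed to the model (e.g. via a text-generation pipeline), which continues after the final "A:".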
Do you have your raspberry pis and phones ready for this new model yet?
New model, new architecture, more power:
Since I'm unable to post for about 11 hours, I will post it here: https://huggingface.co/appvoid/arco-3
the issue is, we have yet to find how to apply that to the language space
Current transformer-based, self-supervised systems have driven massive gains, but important gaps remain on the path to AGI. Key missing pieces are continual, curiosity-driven learning; grounded multimodal perception; reliable, contextual long-term memory with forgetting; motivated (hot) executive control and dynamic attention; metacognition and coherent causal world-models; and robust fluid reasoning, planning and decision-making. Progress will require hybrid architectures (neuromorphic/Hebbian + gradients + symbolic modules), active-inference and intrinsic-motivation objectives, and new lifelong, embodied benchmarks to evaluate safety and competence.
https://huggingface.co/blog/KnutJaegersberg/whats-missing-for-agi-in-todays-tech-trajectories
all i can say to this question is i don't know, maybe it could rapidly develop into an asi, and if the amount of compute for a superintelligence ends up being at human brain level or even a little more than that, then it's easier to picture the implications
in your hypothetical scenario, there could be rich people who become criminals with such power btw, corruption is universal, and the power of knowing non-obvious things about reality, absent a higher intelligence to keep it in check, can be pointed at the masses
obviously gpt 2 was the kickstarter for openai but they didn't actually know the power of gpt4 when they created gpt 1
same thing could happen with reasoning, it might or might not have bigger implications, who knows
good point, if someone creates an ai that extrapolates to any dataset then it might just advance science faster than the average damage bad guys cause
i know right?
char level text editing
beware, it might be addictive once you learn how it works: https://nohak.pythonanywhere.com/
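Character-level text editing boils down to inserting or deleting one character at an index. A minimal sketch of the idea (these helper names are hypothetical; the linked tool's actual mechanics may differ):

```python
# Two basic char-level edit operations on an immutable string:
# slice around the target index and rebuild.
def insert_char(text, index, ch):
    return text[:index] + ch + text[index:]

def delete_char(text, index):
    return text[:index] + text[index + 1:]

s = "helo"
s = insert_char(s, 3, "l")   # fix the typo -> "hello"
s = delete_char(s, 0)        # drop the first char -> "ello"
print(s)
```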
Smart Search: Now just type what you want to do, like "make a viral meme" or "generate music", and our search gets it.
New Categories: Check out the cool new filter bar with icons to help you pick a category fast.
Redesigned Space Cards: Reworked a bit to really show off the app descriptions, so you know what each Space does at a glance.
Random Prompt: Need ideas? Hit the dice button for a burst of inspiration.
We'd love to hear what you think, drop us some feedback plz!
The Lightweight Embeddings API offers state-of-the-art text and image embeddings, advanced reranking, and seamless support for over 100 languages: no limits, no restrictions.
Try it now: lamhieu/lightweight-embeddings
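Whatever service produces the embeddings, downstream use usually means comparing vectors with cosine similarity. A self-contained sketch (the 3-d vectors are made up for illustration; real embeddings have hundreds of dimensions):

```python
import math

# Cosine similarity: dot product of the vectors divided by the
# product of their magnitudes; 1.0 = same direction, 0.0 = orthogonal.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

v1 = [1.0, 0.0, 1.0]
v2 = [1.0, 0.0, 1.0]
v3 = [0.0, 1.0, 0.0]
print(cosine_similarity(v1, v2))  # 1.0 (identical vectors)
print(cosine_similarity(v1, v3))  # 0.0 (orthogonal vectors)
```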
I found it useful to think of AI agent design as progressing up a ladder, through evolutionary selection.
https://huggingface.co/blog/KnutJaegersberg/intelligence-potentiation
Not everyone is brave enough to release what they have done (the way they've done it) to the wild to be judged!
It really requires a high level of "knowing wth you are doing"! It's kind of a super power!
Cheers to the heroes here who see this!