Future of Agentic Models
Sorry for bothering anyone who will get a notification from this discussion.
In my company, we are trying different ideas regarding the development of Agentic Models and we noticed something that we summarized in this article:
The MCP Era Feels Like Déjà Vu
TL;DR: we believe that MCP is just the rediscovery of libraries/packages.
We would be happy if someone read our article and told us what they think. We want to hear from others in the open source community about how the future should look.
Not rediscovery, no it's more like packages for AIs
@Reality123b exactly, but should we build new packages for AI or use what we have been building for decades now?
just write better docs for it and remove any redundancy
I'm on the fence about this one. Earlier systems can be highly optimized for hardware of the era, while the newer systems tend to lean more towards being hardware agnostic.
On one hand, many earlier systems were designed for specific platforms and faded into obscurity over time; on the other, some of them are intensely optimized and highly productive. Many of these systems are orders of magnitude faster than current ones, but they get forgotten because of their narrowly labeled use case, while in reality the use cases are often considerably more general if you target them agnostically rather than targeting a specific case. This has led to quite a few disruptions in coding with LLMs, and it has also led me to discover many unique package types that I had no knowledge of.
The process requires deductive reasoning and hardware testing per package, which is most likely not going to happen in this era without deployable systems that can test package utilization and benchmarks across alternative hardware types. With that comes the problem of delegating an actual plan for such a task, which would most often require a deviation per hardware interaction: spoof-proofing, agnostic pooling, analytics per hardware, and so on. So large-scale benchmarking for optimization, which is what I like to do, isn't easy.
MANY old packages are good - some are very good and very useful. MANY are too specific. Many are simply too obscure to translate into standard use, and I am guilty of producing many of those.
I'd say some of the biggest problems rest in accessing obscure packaging rules, obscure DLL files, obscure utilities that didn't survive the transition to the GitHub era, and more. Many packages simply don't work in modern versions of the languages; many just kind of work. Sometimes it just takes a couple of lines of code to port something; sometimes the package has already been rewritten and we have no idea.
So in short: use dir() and help(), and rediscover the balance between compute and memory.
Bravo. I'm on the fence whether I prefer notifications about human or AI-generated text! You do look human though:
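For anyone unfamiliar with the suggestion above: dir() and help() are Python's built-in introspection tools, and they already do much of what MCP-style tool discovery does for agents. A minimal sketch (using the stdlib json module purely as an example target):

```python
import json
import io
import contextlib

# dir() lists the names a module exposes -- a quick inventory of its "tools".
public_names = [name for name in dir(json) if not name.startswith("_")]
print(public_names)

# help() prints the built-in documentation; capture it to a string here
# so it can be fed to a program (or a model) instead of a terminal pager.
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    help(json.loads)
doc = buffer.getvalue()
print(doc.splitlines()[0])
```

The same two calls work on any importable package, which is the point being made: decades of libraries already ship machine-readable interface descriptions.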
I'm human enough. I work with Claude a lot, so my articles are pretty handwavy and often bulk-written by Claude, but the individual posts you see from me are always handwritten. Claude doesn't get to speak for me directly. Well, I think there might be one or two written by Claude, but the rest are me.
I don't get it.
I went to college for programming and worked in the gaming industry for uhh... 14 years-ish before I moved to AI. That doesn't just disappear; those words come from a series of pain-in-the-ass experiences that compounded over time into a statement about LLMs and AI.
Take it how you will.
Well it looks like we built similar tools to rate text and inspect various LLM metrics, to help us separate human-written text from LLM outputs.
Although I must say yours looks better, I love your UI / color scheme BTW @RiverRider !
Mine is a bit aggressive because it shows whether some snippet is surprising for a given model, and formal/generic sentences are rated low even though they are sometimes human too.
For example, I scored my own documentation and the tool says I write like a bot too! All bots, I'm def not any better xD
Anyway, yes essentially I agree with your post, nice one!
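For anyone curious how "surprising for a given model" works in tools like the ones described above: they score text by the model's average negative log-probability per token. Here is a toy sketch of that idea using a unigram word-frequency model as the "reference model" (the actual tools use an LLM's token log-probabilities; the corpus and function names here are illustrative, not from either tool):

```python
import math
from collections import Counter

# Toy reference model: unigram word frequencies from a tiny corpus.
corpus = "the cat sat on the mat the dog sat on the log".split()
counts = Counter(corpus)
total = sum(counts.values())

def surprisal(text, smoothing=1.0):
    """Average negative log-probability (bits) per word under the
    reference model. Higher = more surprising to this model."""
    vocab = len(counts) + 1  # +1 bucket for unseen words
    words = text.split()
    bits = 0.0
    for w in words:
        # Laplace-smoothed probability so unseen words don't blow up.
        p = (counts.get(w, 0) + smoothing) / (total + smoothing * vocab)
        bits += -math.log2(p)
    return bits / len(words)

# In-distribution text scores low; out-of-distribution text scores high.
print(surprisal("the cat sat"))
print(surprisal("quantum flux capacitor"))
```

This also illustrates the false-positive problem mentioned above: generic, high-frequency phrasing scores as unsurprising, whether a human or a model wrote it.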
That is pretty cool.
Can we actually put these posts in better places... Why are we letting things fall apart?