In my last post, I told you about my discovery of the Open Floor Protocol. Today I want to show you a small npm package I built to make building OFP-compliant agents easier.
Huge credit to David Attwater, who wrote the Python package I relied on heavily.
I also created a small sample parrot agent (currently only implementing the utterance and manifest events) which just repeats what you tell it. Next, I will implement a sample with multiple agents and a floor manager to show the true power of the Open Floor Protocol.
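To give you a feel for what the parrot does, here is a rough TypeScript sketch of the echo logic. It is not the actual API of my npm package; the envelope shape (openFloor, events, dialogEvent, features.text.tokens) follows my reading of the Open Floor inter-agent message spec, so treat the field names as an approximation.

```typescript
// Minimal sketch of a parrot agent's echo logic over a plain JSON
// Open Floor envelope. Field names approximate the spec; they are not
// the npm package's actual types.
type Token = { value: string };
type UtteranceEvent = {
  eventType: "utterance";
  parameters: {
    dialogEvent: {
      speakerUri: string;
      features: { text: { mimeType: string; tokens: Token[] } };
    };
  };
};
type Envelope = {
  openFloor: {
    schema: { version: string };
    conversation: { id: string };
    sender: { speakerUri: string };
    events: UtteranceEvent[];
  };
};

// Hypothetical speaker URI for the parrot agent.
const PARROT_URI = "tag:example.com,2025:parrot-agent";

// Take an incoming envelope and build a reply that repeats the user's text.
function echo(incoming: Envelope): Envelope {
  const utterance = incoming.openFloor.events.find(e => e.eventType === "utterance");
  const text = utterance
    ? utterance.parameters.dialogEvent.features.text.tokens.map(t => t.value).join(" ")
    : "";

  return {
    openFloor: {
      schema: { version: "1.0.0" },
      conversation: incoming.openFloor.conversation, // stay in the same conversation
      sender: { speakerUri: PARROT_URI },
      events: [
        {
          eventType: "utterance",
          parameters: {
            dialogEvent: {
              speakerUri: PARROT_URI,
              features: { text: { mimeType: "text/plain", tokens: [{ value: text }] } },
            },
          },
        },
      ],
    },
  };
}
```

In practice you would put something like this behind a small HTTP endpoint that accepts an envelope and returns the reply.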
After I built Consilium, my multi-AI expert consensus platform for the Gradio Agents and MCP Hackathon, Deborah Dahl introduced me to the Open Floor Protocol.
This protocol provides a standardized JSON message format for communication between conversational agents and human users across different platforms.
Key interaction patterns:
→ Delegation - transferring control between agents
→ Channeling - passing messages without modification
→ Mediation - coordinating behind the scenes
→ Orchestration - multiple agents collaborating
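To make the delegation pattern a bit more concrete, here is a sketch of what a floor manager handing control to another agent might look like. The event name "invite" and the addressing fields are based on my reading of the spec; the exact names may differ in the current version.

```typescript
// Sketch of a delegation-style event: a convener invites another agent
// into the conversation. Event and field names are assumptions based on
// my reading of the Open Floor spec, not verbatim from it.
const delegationEnvelope = {
  openFloor: {
    schema: { version: "1.0.0" },
    conversation: { id: "conv:weather-demo" },                    // hypothetical conversation id
    sender: { speakerUri: "tag:example.com,2025:floor-manager" }, // hypothetical convener URI
    events: [
      {
        eventType: "invite",                                      // hand control to the weather agent
        to: { serviceUrl: "https://example.com/weather-agent" },  // hypothetical agent endpoint
        reason: "user asked about the weather",
      },
    ],
  },
};
```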
I am already working on a version of Consilium where you can add any Open Floor-compliant agent.
The hackathon has ended, but I have updated Yuga Planner with MCP support!
Yuga Planner takes any task description, breaks it down into actionable tasks with LlamaIndex and Nebius, then schedules them automatically with Timefold.
You can schedule your task on its own or around an existing .ics file, fitting it into your existing schedule.
You can call Yuga Planner from any MCP-enabled client, or try it now from the Gradio live demo: blackopsrepl/yuga-planner!
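If you would rather script against the live demo than click through the UI, a call through the Gradio JS client might look roughly like this; the endpoint name and parameter below are hypothetical, so check the Space's "Use via API" page for the real ones.

```typescript
// Rough sketch using the @gradio/client package to call the hosted Space.
// The endpoint name "/schedule" and the parameter name are hypothetical;
// the Space's "Use via API" page lists the actual ones.
import { Client } from "@gradio/client";

const app = await Client.connect("blackopsrepl/yuga-planner");
const result = await app.predict("/schedule", {
  task_description: "Plan and launch a small marketing website",
});
console.log(result.data);
```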
See It to Believe It: how does QWEN 4B perform in an on-device environment without an expensive GPU cloud server? We've crafted a side-by-side demo video showcasing both Jan-Nano and QWEN 4B in action, so there's no more wondering which model reigns supreme. Click play, compare their speed, accuracy, and memory footprints, and decide which one fits your needs best!
Why You Can't Miss This: we are actively creating runnable sLLM environments for on-device AI, so you can build on-device AI apps within a few hours. Several sLLM models, including Jan-Nano and QWEN 4B, are ready to be used in your AI application!
Please feel free to use them, because they are free to use!
Ready to Compare?
Watch now, draw your own conclusions, and let us know which model you'd deploy in your next edge-AI project!