# agentification
Playing around with multiple local LLMs using the LM Studio SDK.
- Download and install LM Studio from https://lmstudio.ai/
- Make sure Bun is installed from https://bun.sh/
- Run:
```sh
bun i

# get the models you want to use
lms get openai/gpt-oss-20b
lms get mistralai/mistral-nemo-instruct-2407
lms get liquid/lfm2-1.2b
...
```
Run the examples:
1. Turn-based conversation between multiple agents

```sh
bun chat/agents-just-talk-loop.ts
```
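At its core, a turn-based conversation loop like this cycles agents round-robin over a shared transcript. Below is a minimal sketch of that idea; the `Agent` type and the stubbed `respond` functions are illustrative stand-ins for the real model calls made through the LM Studio SDK, not the repo's actual code.

```typescript
// Sketch of a turn-based agent conversation loop.
// The real script drives LM Studio models; here `respond` is stubbed
// so the turn-taking logic stands alone.
type Agent = { name: string; respond: (transcript: string[]) => string };

function runConversation(agents: Agent[], turns: number): string[] {
  const transcript: string[] = [];
  for (let i = 0; i < turns; i++) {
    // round-robin: each agent speaks in order, seeing the shared history
    const agent = agents[i % agents.length];
    const reply = agent.respond(transcript);
    transcript.push(`${agent.name}: ${reply}`);
  }
  return transcript;
}

// Stub agents; a real agent would wrap a model.respond(...) call.
const agents: Agent[] = [
  { name: "alice", respond: (t) => `message ${t.length + 1}` },
  { name: "bob", respond: (t) => `message ${t.length + 1}` },
];

const log = runConversation(agents, 4);
console.log(log.join("\n"));
```

Swapping the stubs for real model handles turns this into a multi-model conversation while keeping the loop unchanged.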
2. Turn-based arena battle between multiple agents

```sh
bun act/agents-arena-loop.ts
```
3. Simulate a standup meeting between multiple agents, then have them work separately on tasks

```sh
bun coding/agents-agile-loop.ts

# ...in a separate shell, start the bundler for watching the SPA
cd coding/agile
bun run watch

# ...in a separate shell, start serving the SPA
cd coding/agile
bun run serve

# ...in a separate shell, start the SSR server
bun ssr/ssr-server.ts
```
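The standup-then-work pattern behind the agile loop can be sketched as two phases: a shared round where every agent announces its plan, then independent work on each agent's own task. The `Worker`, `standup`, and `workSeparately` names below are hypothetical, chosen only to illustrate the shape of such a flow.

```typescript
// Sketch of a standup-then-work flow (illustrative names, not the repo's code).
type Worker = { name: string; task: string };

function standup(workers: Worker[]): string[] {
  // phase 1: everyone announces what they will work on
  return workers.map((w) => `${w.name} will work on ${w.task}`);
}

function workSeparately(workers: Worker[]): Record<string, string> {
  // phase 2: each agent produces output for its own task independently
  const results: Record<string, string> = {};
  for (const w of workers) results[w.name] = `done: ${w.task}`;
  return results;
}

const team: Worker[] = [
  { name: "frontend-agent", task: "SPA layout" },
  { name: "backend-agent", task: "SSR endpoint" },
];

console.log(standup(team).join("\n"));
console.log(workSeparately(team));
```

In the real script each phase would be a round of model calls rather than string templates, but the two-phase orchestration stays the same.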
4. One agent building a vanilla JS SPA

```sh
bun coding/agents-vanilla-loop.ts

# ...in a separate shell, start serving the SPA
cd coding/vanilla
bun run serve

# ...in a separate shell, start the SSR server
bun ssr/ssr-server.ts
```
5. One agent building a JS SPA with visual feedback only

```sh
bun coding/agents-visual-worker-loop.ts

# ...in a separate shell, start serving the SPA
cd coding/visual-worker
bun run serve

# ...in a separate shell, start the SSR server
bun ssr/ssr-server.ts
```