🦚Features | 📍Roadmap | 🛠️Contribute | 🏃Run Locally | 🌺Open Core
- Try the cloud preview ↗
- Join the Discord
- ⭐️ Help the algorithm: star this repo
You can also build and deploy it yourself! However, you must configure your environment.
- Deploy to Vercel
- Docker Compose
- Build and run from source
waggledance.ai is an experimental application focused on achieving user-specified goals. It provides a friendly but opinionated user interface for building agent-based systems. The project focuses on explainability, observability, concurrent generation, and exploration. It is currently in pre-alpha, and the development philosophy prefers experimentation over stability, as goal-solving and agent systems are rapidly evolving.
waggledance.ai takes a goal and passes it to a Planner Agent, which streams an execution graph of sub-tasks. Each sub-task is executed as concurrently as possible by Execution Agents. To reduce poor results and hallucinations, sub-results are reviewed by Criticism Agents. Eventually, the human in the loop (you!) will be able to chat with individual Agents and provide course-corrections if needed.
It was originally inspired by Auto-GPT and has concurrency features similar to those found in gpt-researcher. Core tenets of the project include speed, accuracy, observability, and simplicity. Additionally, many other agentic systems are written in Python, so this project acts as a small counter-balance and is accessible to the large number of JavaScript developers.
An (unstable) API is also available via tRPC, as well as an API implemented within Next.js. The client side is mostly responsible for orchestrating and rendering the agent executions, while the API and server side execute the agents and store the results. This architecture is likely to be adjusted in the future.
- LLMs go brrr… waggledance.ai starts by planning a highly concurrent execution graph. Sub-task branches that do not depend on one another can run concurrently.
- Adversarial agents that review results.
- Vector database for long-term memory.
- Explainable results and responsive UI: Graph visualizer, sub-task (agent) results, agent logs and events.
Typescript ﹒ Langchain.js ﹒ T3 ﹒ Prisma ﹒ tRPC ﹒ Weaviate ﹒ Postgres ﹒ OpenAI API ﹒ MUI Joy
Live Project Roadmap Board ﹒ 🛠️Contribute
Basically, anything and everything goes! Though multi-agent systems have a long and storied past, this project is all about marrying past techniques with the latest research.
waggledance.ai can be deployed locally using Docker or manually using Node.js. Configuration of the `.env` vars is required.
docker-compose up --build
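A minimal sketch of the Docker route, assuming you run it from the repository root and have filled in your `.env` first (see the environment-variable notes below):

```sh
cp .env.example .env       # fill in the required values before building
docker-compose up --build
```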
- Required: Node.js ≧ v18.17.0 (LTS recommended)
- `pnpm` is used in the examples, but `npm` or `yarn` may work as well.
- Recommended: Turbo - install it with `pnpm add turbo --global`, or use `pnpx turbo` in place of `turbo` below.
- Copy `.env.example` to `.env` and configure the environment variables. For help, please reach out on Discord. See env-schema.mjs for the explicit requirements.
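As a rough illustration of what goes in `.env` (the keys below are examples only, not the definitive list; `.env.example` and env-schema.mjs are the source of truth):

```sh
# Example values only - check .env.example and env-schema.mjs for the keys actually required
OPENAI_API_KEY="sk-..."                                            # OpenAI API access
DATABASE_URL="postgresql://user:password@localhost:5432/mydb"      # Postgres connection string used by Prisma
```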
Refer to .env.example and env-schema.mjs for the required environment variables. Currently only Postgres via Prisma is supported. You can use a local Postgres instance (it is recommended to use Docker) or a cloud provider such as Supabase.
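For a local instance, one option is the official Postgres image (a minimal sketch; the container name, password, and database name are placeholders you should change):

```sh
docker run --name waggledance-postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=waggledance \
  -p 5432:5432 \
  -d postgres
```

The resulting connection string (e.g. `postgresql://postgres:postgres@localhost:5432/waggledance`) then goes into your `.env`.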
Once you have set up, secured, and configured your Postgres, run the following commands:
pnpm db:generate
pnpm db:push
- `db:generate` creates the local typings and DB info from the schema.prisma file (`./packages/db/prisma/schema.prisma`).
- `db:push` pushes the schema to the database provider (PostgreSQL by default).
- Run these commands on first install and whenever you make changes to the schema.
turbo dev
# or
pnpm dev
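Putting the steps together, a typical first run from source looks roughly like this (assuming dependencies are installed with `pnpm install`):

```sh
pnpm install          # install workspace dependencies
cp .env.example .env  # then fill in the required variables
pnpm db:generate      # generate Prisma typings from schema.prisma
pnpm db:push          # push the schema to your Postgres instance
turbo dev             # start the dev server
```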
This project was forked from create-t3-turbo. To find out more, you can check the boilerplate documentation.
Make sure you install the recommended extensions in the workspace, particularly `eslint`.
Linting is run on each build and can fail builds.
To get a full list of linting errors run:
turbo lint
Some of these can be auto-fixed with:
turbo lint:fix
And the command that the CI runs:
SKIP_ENV_VALIDATION=true NODE_ENV=production turbo build
For the rest, you will need to open the associated file and fix the errors yourself. Reserve `ts-ignore` for extreme cases.
As a best practice, run `turbo lint` before starting a feature and after finishing one, and fix any errors before sending a PR.
- Devs: CONTRIBUTING.md
- Star the Project!
- Join the Discord!
- If you are not technical, you can still help by improving documentation, adding examples, or sharing your user stories with our community; any help or contribution is welcome!
- Maintainers and Contributors of LangChain.js
- Maintainers and Contributors of AutoGPT, AgentGPT, SuperAGI, gpt-researcher, lemon-agent
- E2B
- Agent Protocol from AI Engineer Foundation
- big-AGI
- more...
The applications, packages, libraries, and the entire monorepo are freely available under the MIT license. The development process is open, and everyone is welcome to join. In the future, we may choose to develop extensions that are licensed for commercial use.