cofounder
The following points are strongly emphasized.

To help you decide whether or not to try the current release, here is a quick guide:
| Situation | Recommendation |
|---|---|
| I'm not sure if this release is mature yet; maybe it won't work as intended and I may spend millions of tokens for nothing. | Do not use it yet. |
| I am very excited about this tool and hope it is perfectly production-ready, because if it's not, I will complain about how much I spent on OpenAI API calls. | Do not use it yet. |
| I am not interested in code. I want to type words into a box and have my project completed; I do not want messy, broken, unfinished code. | Do not use it yet. |
| I love exploring experimental tools, but I am on the fence; it's going to break halfway and leave me sad. | Do not use it yet. |
| Who should even try it at this point? | Nobody. Do not use it yet. |
| But I really want to use it for some esoteric reason, having read all of the above. | Do not use it yet either. |
https://github.com/user-attachments/assets/cfd09250-d21e-49fc-a29b-fa0c661abfc0
https://github.com/user-attachments/assets/c055f9c4-6bc0-4b11-ba8f-cc9f149387fa
Early alpha release; a few weeks earlier than expected.

Some key target features of the project are not merged yet; be patient :)
To get started, run:

```sh
npx @openinterface/cofounder
```
Follow the instructions; the installer will set up the required dependencies and launch the local cofounder/api builder and server (http://localhost:4200) 🎉

Note:

- you will be asked for a cofounder.openinterface.ai key
- it is recommended to use one, as it enables the designer/layoutv1 and swarm/external-apis features, and it can be used without limits during the current early alpha period
- the full index will be available for local download on v1 release
node v22 is required for the whole project.

```sh
# alternatively, you can make a new project without going through the dashboard
# by running:
npx @openinterface/cofounder -p "YourAppProjectName" -d "describe your app here" -a "(optional) design instructions"
```
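For instance, a concrete invocation could look like the sketch below; the project name, description, and design instructions are made-up placeholders, while the flags are the ones documented above:

```sh
# hypothetical example values for the documented flags
npx @openinterface/cofounder \
  -p "TodoListApp" \
  -d "a simple todo list app with tags, due dates and user accounts" \
  -a "dark theme, rounded corners, large typography"
```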
Your generated app will be placed in ./apps/{YourApp}.

Open your terminal in ./apps/{YourApp} and run:

```sh
npm i && npm run dev
```

This will start both the backend and the vite+react frontend concurrently, after installing their dependencies. Go to http://localhost:5173/ to open the web app 🎉
[more details later]
If you resume later and would like to iterate on your generated apps, the local ./cofounder/api server needs to be running to receive queries.

You can (re)start the local cofounder API by running the following command from ./cofounder/api:

```sh
npm run start
```

The dashboard will open at http://localhost:4200.
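As a rough sketch of a typical resume session, assuming the default paths and ports above (the app folder name is a placeholder), you might keep two terminals open:

```sh
# terminal 1: local cofounder API + dashboard (http://localhost:4200)
cd cofounder/api
npm run start

# terminal 2: the generated app you are iterating on (http://localhost:5173)
cd apps/YourAppProjectName
npm run dev
```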
Note: you can also generate new apps from the same environment, without the dashboard, by running one of these commands from ./cofounder/api:

```sh
npm run start -- -p "ProjectName" -f "some app description" -a "minimalist and spacious , light theme"
npm run start -- -p "ProjectName" -f "./example_description.txt" -a "minimalist and spacious , light theme"
```
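For example, the second form points -f at a plain-text description file; the file contents and project name in this sketch are hypothetical:

```sh
# write a (hypothetical) app description to a file, then generate from it
cat > ./example_description.txt << 'EOF'
A kanban-style task board with user authentication,
project sharing, and a simple activity feed.
EOF

npm run start -- -p "KanbanApp" -f "./example_description.txt" -a "minimalist and spacious , light theme"
```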
[the architecture will be further detailed and documented later]
Every "node" in the cofounder architecture has a defined configuration under ./cofounder/api/system/structure/nodes/{category}/{name}.yaml to handle things like concurrency, retries, and limits per time interval.

For example, if you want multiple LLM generations to run in parallel (when possible; sequences and parallels are defined in DAGs under ./cofounder/api/system/structure/sequences/{definition}.yaml), go to:
```yaml
# ./cofounder/api/system/structure/nodes/op/llm.yaml
nodes:
  op:LLM::GEN:
    desc: "..."
    in: [model, messages, preparser, parser, query, stream]
    out: [generated, usage]
    queue:
      concurrency: 1 # <------------------------------- here
  op:LLM::VECTORIZE:
    desc: "{texts} -> {vectors}"
    in: [texts]
    out: [vectors, usage]
    mapreduce: true
  op:LLM::VECTORIZE:CHUNK:
    desc: "{texts} -> {vectors}"
    in: [texts]
    out: [vectors, usage]
    queue:
      concurrency: 50
```
and change the op:LLM::GEN concurrency parameter to a higher value.

The default LLM concurrency is kept low so you can follow what's happening in your console streams step by step, but you can increase it depending on your API key limits.
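The node definitions are presumably read when the API starts, so restarting the local server after editing the yaml is the safe assumption (this restart step is an assumption, not documented behavior):

```sh
# after raising the op:LLM::GEN queue concurrency in llm.yaml,
# restart the local cofounder API so the new limits take effect
cd cofounder/api
npm run start
```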
[WIP]
[more details later]
archi/v1 is as follows:

[architecture diagram]

(see cofounder/api/system/presets)
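If you want to poke around the pieces referenced above, the paths from this document can be listed directly from the project root (directory contents will vary by version):

```sh
# node configurations (concurrency, retries, limits per time interval)
ls ./cofounder/api/system/structure/nodes/

# sequence / DAG definitions
ls ./cofounder/api/system/structure/sequences/

# presets referenced by archi/v1
ls ./cofounder/api/system/presets/
```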