project: cofounder.openinterface.ai
@n_raidenai
cofounder
full stack generative web apps ; backend + db + stateful web apps
gen ui rooted in app architecture, with ai-guided mockup designer & modular design systems
The following points are heavily emphasized:
This is an EARLY, UNSTABLE, PREVIEW RELEASE of the project.
Until v1 is released, it is expected to break often.
It consumes a lot of tokens. If you are on a token budget, wait until v1 is released.
Again, this is an early, unstable release. A first test run. An early preview of the project's ideas. Far from completion. Open-source iterative development. Work in progress. Unstable early alpha release. [etc]
Early alpha release ; earlier than expected by 5/6 weeks
It is still not merged with key target features of the project, notably:
- project iteration modules for all dimensions of generated projects
- admin interface for event streams and (deeper) project iterations
- integrate the full genUI plugin:
  - generative design systems
  - deploy finetuned models & serve from api.cofounder
- local, browser-based dev env for the entire project scope
- add { react-native , flutter , other web frameworks }
- validations & swarm code review and autofix
- code optimization
- [...]
be patient :)
Open your terminal and run:

```sh
npx @openinterface/cofounder
```

Follow the instructions. The installer:

- will ask you for your keys
- will set up dirs & start installs
- will start the local cofounder/api builder and server
- will open the web dashboard where you can create new projects (at http://localhost:4200)
note: you will be asked for a cofounder.openinterface.ai key. It is recommended to use one, as it enables the designer/layoutv1 and swarm/external-apis features, and it can be used without limits during the current early alpha period. The full index will be available for local download on v1 release.
currently using node v22 for the whole project.
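If you use nvm to manage node versions, switching takes one command; this is a minimal sketch assuming nvm is already installed (any version manager that can select node 22 works just as well):

```sh
# install and activate node v22 with nvm (assumes nvm is installed)
nvm install 22
nvm use 22
```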
```sh
# alternatively, you can make a new project without going through the dashboard
# by running:
npx @openinterface/cofounder -p "YourAppProjectName" -d "describe your app here" -a "(optional) design instructions"
```
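For example, a run with the flags filled in might look like this (the project name, description, and design instructions are illustrative, not from the project docs):

```sh
# illustrative values only
npx @openinterface/cofounder -p "RecipeBox" -d "a web app to save, tag and search cooking recipes" -a "minimalist and spacious, light theme"
```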
Your backend & vite+react web app will incrementally generate inside ./apps/{YourApp}

Open your terminal in ./apps/{YourApp} and run:

```sh
npm i && npm run dev
```

It will start both the backend and vite+react concurrently, after installing their dependencies. Go to http://localhost:5173/ to open the web app.
From within the generated apps, you can use ⌘+K / Ctrl+K to iterate on UI components.
[more details later]
If you resume later and would like to iterate on your generated apps, the local ./cofounder/api server needs to be running to receive queries.

You can (re)start the local cofounder API by running the following command from ./cofounder/api:

```sh
npm run start
```

The dashboard will open at http://localhost:4200
note: you can also generate new apps from the same env, without the dashboard, by running one of these commands from ./cofounder/api:

```sh
npm run start -- -p "ProjectName" -f "some app description" -a "minimalist and spacious , light theme"
npm run start -- -p "ProjectName" -f "./example_description.txt" -a "minimalist and spacious , light theme"
```
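As the second command suggests, the -f flag seems to accept either an inline description or a path to a text file. A sketch of the file-based flow, with an illustrative file name and description:

```sh
# write the app description to a file, then pass its path with -f
echo "a kanban board with user auth and realtime updates" > ./my_app_description.txt
npm run start -- -p "KanbanApp" -f "./my_app_description.txt" -a "minimalist and spacious , light theme"
```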
[the architecture will be further detailed and documented later]
Every "node" in the cofounder
architecture has a defined configuration under ./cofounder/api/system/structure/nodes/{category}/{name}.yaml
to handle things like concurrency, retries and limits per time interval
For example, if you want multiple LLM generations to run in parallel (when possible; sequences and parallels are defined in DAGs under ./cofounder/api/system/structure/sequences/{definition}.yaml), go to:

```yaml
# ./cofounder/api/system/structure/nodes/op/llm.yaml
nodes:
  op:LLM::GEN:
    desc: "..."
    in: [model, messages, preparser, parser, query, stream]
    out: [generated, usage]
    queue:
      concurrency: 1 # <------------------------------- here

  op:LLM::VECTORIZE:
    desc: "{texts} -> {vectors}"
    in: [texts]
    out: [vectors, usage]
    mapreduce: true

  op:LLM::VECTORIZE:CHUNK:
    desc: "{texts} -> {vectors}"
    in: [texts]
    out: [vectors, usage]
    queue:
      concurrency: 50
```
and change the op:LLM::GEN concurrency parameter to a higher value.

The default LLM concurrency is set to 2 so you can see what's happening in your console streams step by step, but you can increase it depending on your API keys' rate limits.
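For instance, raising it would touch only this subtree of the node config; the value 4 below is illustrative, so pick one consistent with your provider's rate limits:

```yaml
# ./cofounder/api/system/structure/nodes/op/llm.yaml (excerpt)
nodes:
  op:LLM::GEN:
    queue:
      concurrency: 4 # raised so up to 4 LLM generations can run in parallel
```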
[WIP]
[more details later]
archi/v1 is as follows:
Demo design systems built using Figma renders / UI kits from:

- blocks.pm by Hexa Plugin (see cofounder/api/system/presets)
- google material
- figma core
- shadcn

Dashboard node-based UI powered by react flow.