A 100% local, LLM-generated and driven virtual pet with thoughts, feelings and feedback. Revive your fond memories of Tamagotchi! https://ai-tamago.fly.dev/
All ASCII animations are generated using ChatGPT (prompts included in the repo).
Have questions? Join AI Stack devs and find me in the #ai-tamago channel.
Fork the repo to your GitHub account, then run the following command to clone it:
git clone [email protected]:[YOUR_GITHUB_ACCOUNT_NAME]/AI-tamago.git
cd ai-tamago
npm install
All client-side Tamagotchi code is in Tamagotchi.tsx.
Next, install the Supabase CLI (instructions are here):
brew install supabase/tap/supabase
Make sure you are under the /ai-tamago directory and run:
supabase start
Tip: to run migrations or reset the database (seed.sql and migrations will run), use:
supabase db reset
Note: the secrets here are for your local Supabase instance.
cp .env.local.example .env.local
Then get SUPABASE_PRIVATE_KEY by running:
supabase status
Copy the service_role key and save it as SUPABASE_PRIVATE_KEY in .env.local.
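For reference, the resulting entries in .env.local can look like this (the values shown are illustrative placeholders, not real secrets; the local API URL and port come from your own supabase status output):

```sh
# .env.local (local Supabase instance; values are illustrative placeholders)
SUPABASE_URL=http://localhost:54321   # "API URL" reported by `supabase status`
SUPABASE_PRIVATE_KEY=eyJ...           # "service_role key" from `supabase status`
```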
Start the Inngest dev server:
npx inngest-cli@latest dev
Make sure your app is up and running -- Inngest functions (which are used to drive game state) should register automatically.
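Conceptually, each Inngest run advances the pet's state. The repo's actual functions use Inngest's SDK; the sketch below is a hypothetical, library-free illustration of what "driving game state" means here (the stat names and thresholds are invented for the example, not taken from the repo):

```typescript
// Hypothetical sketch of the game state a scheduled Inngest function could
// advance on each tick; fields and thresholds are invented for illustration.
type PetState = { hunger: number; happiness: number; status: string };

function tick(state: PetState): PetState {
  const hunger = Math.min(10, state.hunger + 1);      // pet gets hungrier over time
  const happiness = Math.max(0, state.happiness - 1); // and bored without attention
  const status =
    hunger >= 8 ? "hungry" : happiness <= 2 ? "bored" : "content";
  return { hunger, happiness, status };
}
```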
Now you are ready to test out the app locally! To do this, simply run npm run dev under the project root and visit http://localhost:3000.
Now that you have played with AI Tamago locally, it's time to deploy it somewhere more permanent so you can access it anytime!
0. Choose which model you want to use in production.
- OpenAI: remove LLM_MODEL=ollama from .env.local and fill in OPENAI_API_KEY.
- Replicate: set LLM_MODEL=replicate_llama in .env.local and fill in REPLICATE_API_TOKEN.
- Ollama: works on a performance-4x Fly VM (CPU) with a 100gb volume, but if you can get access to GPUs they are much faster. Join Fly's GPU waitlist here if you don't yet have access!
1. Switch to the deploy branch -- this branch includes everything you need to deploy an app like this:
git checkout deploy
This branch contains a multi-tenancy-ready app (thanks to Clerk), which means every user gets their own AI Tamago, and has a request limit built in -- you can set how many times a user can send requests in the app (see ratelimit.ts).
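The repo's ratelimit.ts uses Upstash's Redis-backed rate-limiting SDK (set up in step 3 below). As a self-contained illustration of the underlying idea, here is a minimal in-memory sliding-window limiter -- the class name and numbers are hypothetical, not the repo's code:

```typescript
// Minimal in-memory sliding-window rate limiter -- an illustration only.
// AI-tamago's ratelimit.ts uses Upstash's SDK instead, which keeps the
// counters in Redis so they survive restarts and work across instances.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the user is still under `limit` requests per window.
  allow(userId: string, now: number = Date.now()): boolean {
    const windowStart = now - this.windowMs;
    // Keep only the request timestamps that fall inside the current window.
    const recent = (this.hits.get(userId) ?? []).filter((t) => t > windowStart);
    if (recent.length >= this.limit) {
      this.hits.set(userId, recent);
      return false; // over the limit -- reject this request
    }
    recent.push(now);
    this.hits.set(userId, recent);
    return true;
  }
}
```

For example, new SlidingWindowLimiter(10, 60_000) would allow each user ten requests per minute; the actual per-user limit for the app is configured in ratelimit.ts.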
2. Move to Supabase Cloud:
Update these values in .env.local from your Supabase Cloud project dashboard:
SUPABASE_URL is the URL value under "Project URL".
SUPABASE_PRIVATE_KEY is the key that starts with ey, under Project API Keys.
From your AI-tamago project root, run:
supabase link --project-ref [insert project-id]
supabase migration up
supabase db reset --linked
3. Create an Upstash Redis instance for rate limiting.
This makes sure no single user calls any API too many times and takes up all the inference workload. We are using Upstash's awesome rate-limiting SDK here.
Once created, add the Redis REST credentials (UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN) to your .env.local.
4. Now you are ready to deploy everything on Fly.io!
Run fly launch under the project root. This will generate a fly.toml that includes all the configurations you will need.
Run fly scale memory 512 to scale up the Fly VM memory for this app.
Run fly deploy --ha=false to deploy the app. The --ha flag makes sure Fly only spins up one instance, which is included in the free plan.
Run cat .env.local | fly secrets import to upload secrets.
For a production environment, create .env.prod locally and fill in all the production-environment secrets. Remember to update NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY and CLERK_SECRET_KEY by copying secrets from Clerk's production instance, then run cat .env.prod | fly secrets import to upload secrets.