https://github.com/asanchezyali/zippy-ai-bot/assets/29262782/933ce0c3-434b-45f8-8c27-6a8669da0407
Zippy Talking Avatar uses Azure Cognitive Services and the OpenAI API to generate text and speech. Built with Next.js and Tailwind CSS, it responds to user input with both generated text and synthesized speech, offering a dynamic and immersive user experience.
Zippy blends multiple AI technologies, using the OpenAI API for conversation and Azure Speech Services for voice, to create a natural and engaging conversational experience.
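As a rough illustration of the conversational half, the sketch below sends a user message to the OpenAI chat completions API and returns the reply. It is a minimal sketch assuming the official `openai` npm package; the function name `getZippyReply` and the chosen model are illustrative, not the repository's actual code.

```typescript
// Minimal sketch of the text-generation step (assumed structure,
// not the repository's actual implementation).
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY,
  // If this runs in the browser, the OpenAI client also requires
  // `dangerouslyAllowBrowser: true`.
});

// Send the user's message to the chat completions API and return Zippy's reply.
export async function getZippyReply(userMessage: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-3.5-turbo", // assumed model; the project may use another
    messages: [
      { role: "system", content: "You are Zippy, a friendly talking avatar." },
      { role: "user", content: userMessage },
    ],
  });
  return completion.choices[0]?.message?.content ?? "";
}
```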
To run Zippy locally, you need an Azure Speech resource and an OpenAI API key. After creating the Speech resource, go to the resource in the Azure portal to view and manage its keys; for more information about Azure AI services resources, see "Get the keys for your resource" in the Azure documentation.

Clone the repository and install its dependencies:

```bash
git clone git@github.com:Monadical-SAS/zippy-avatar-ai.git
cd zippy-avatar-ai
npm install
# or
yarn install
```
Set the following environment variables, for example in a `.env.local` file at the project root (the Next.js convention):

```bash
# AZURE
NEXT_PUBLIC_SPEECH_KEY=<YOUR_AZURE_SPEECH_KEY>
NEXT_PUBLIC_SPEECH_REGION=<YOUR_AZURE_SPEECH_REGION>

# OPENAI
NEXT_PUBLIC_OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
```
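On the speech side, the two Azure variables are what the Speech SDK needs to build a synthesizer. Below is a minimal sketch assuming the official `microsoft-cognitiveservices-speech-sdk` package; the helper name `speakText` and the voice are illustrative, not the repository's actual code.

```typescript
// Minimal sketch of speech synthesis with the Azure Speech SDK
// (assumed structure, not the repository's actual implementation).
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

// Speak the given text using the key and region from the environment.
export function speakText(text: string): Promise<void> {
  const speechConfig = sdk.SpeechConfig.fromSubscription(
    process.env.NEXT_PUBLIC_SPEECH_KEY!,
    process.env.NEXT_PUBLIC_SPEECH_REGION!
  );
  speechConfig.speechSynthesisVoiceName = "en-US-JennyNeural"; // assumed voice

  const synthesizer = new sdk.SpeechSynthesizer(speechConfig);
  return new Promise((resolve, reject) => {
    synthesizer.speakTextAsync(
      text,
      () => {
        synthesizer.close();
        resolve();
      },
      (error) => {
        synthesizer.close();
        reject(new Error(error));
      }
    );
  });
}
```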
Run the development server:

```bash
npm run dev
# or
yarn dev
```
Open http://localhost:3000 with your browser to see the result.
To learn more about Next.js, take a look at the Next.js documentation and the interactive Learn Next.js tutorial. You can also check out the Next.js GitHub repository - your feedback and contributions are welcome!
The easiest way to deploy your Next.js app is to use the Vercel Platform from the creators of Next.js.
Check out our Next.js deployment documentation for more details.