This project is a web application featuring a FastAPI backend and a React frontend styled with Tailwind CSS. The backend uses a fake Large Language Model (LLM) to generate responses based on chat history. It is designed to be flexible, allowing you to easily introduce an actual LLM in the future. The application is containerized using Docker and orchestrated with Docker Compose for easy setup and deployment.
### Clone the Repository

```bash
git clone https://github.com/ross-tsenov/simple-fastapi-react-tailwindcss-chatbot.git
cd simple-fastapi-react-tailwindcss-chatbot
```
### Start the Application

In the root directory of the project, run:

```bash
docker compose up
```

This command builds and starts all the services defined in the `docker-compose.yml` file, including the backend, frontend, and the Traefik reverse proxy.
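For orientation, the service layout might look roughly like the sketch below. This is illustrative only: the image tags, build paths, and Traefik labels are assumptions, and the repository's actual `docker-compose.yml` is authoritative.

```yaml
# Illustrative sketch; see the repository's docker-compose.yml for the real definitions.
services:
  traefik:
    image: traefik:v2.10   # version is an assumption
    ports:
      - "80:80"            # all traffic enters through the reverse proxy on port 80
  backend:
    build: ./backend       # FastAPI app (path is an assumption)
    labels:
      - "traefik.http.routers.backend.rule=PathPrefix(`/api`)"
  frontend:
    build: ./frontend      # React + Tailwind app (path is an assumption)
```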
### Access the Application

Open your web browser and navigate to http://localhost to access the frontend of the application. The API is available at http://localhost/api.
### Stop the Application

To stop the application and remove the containers and networks created by Docker Compose, run:

```bash
docker compose down
```

Named volumes are preserved by default; add the `-v` flag to remove them as well.
If you encounter issues with port 80 (e.g., if it's already in use), you can change the port mapping for the Traefik reverse proxy:

### Modify docker-compose.yml

Open the `docker-compose.yml` file, locate the `traefik` service configuration, and in the `ports` section, change the host port to a different number (e.g., `8050`):

```yaml
ports:
  - "8050:80"
```
### Restart the Application

After saving the changes, restart the application:

```bash
docker compose down
docker compose up
```
### Access the Application on the New Port

Open your web browser and navigate to http://localhost:8050.
To introduce an actual LLM into the backend:

### Add a New Class Implementation

In `model.py`, create a new class that implements the `LLMModel` protocol.

### Register the New Model

Add the new class to the `MODEL_REGISTRY` in `model.py`.

### Update the Backend Configuration

Configure the backend to use the newly registered model.

### Rebuild and Restart the Backend

After making changes, rebuild the backend Docker image and restart the application:

```bash
docker compose build backend
docker compose up
```
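As an illustration, the steps above might look like the sketch below. The method name `generate`, the chat-history shape, and the registry layout are assumptions made for this sketch; match them to the actual `LLMModel` protocol and `MODEL_REGISTRY` defined in the repository's `model.py`.

```python
from typing import Protocol


class LLMModel(Protocol):
    """Assumed shape of the protocol in model.py; check the real definition."""

    def generate(self, history: list[dict[str, str]]) -> str:
        ...


class MyRealModel:
    """Step 1: a new class satisfying the protocol.

    This toy version just echoes the last message; a real implementation
    would call an actual LLM API or a local model here instead.
    """

    def generate(self, history: list[dict[str, str]]) -> str:
        return history[-1]["content"] if history else ""


# Step 2: register the model so the backend can look it up by name.
MODEL_REGISTRY: dict[str, type] = {"my-real-model": MyRealModel}
```

Because `Protocol` uses structural typing, `MyRealModel` needs no explicit inheritance; any class with a matching `generate` method satisfies `LLMModel`.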