This repository contains a Chatbot Translator application that uses the t5-large model from Hugging Face to translate English text into French and German. The backend is built with Flask and serves a Vite React TypeScript frontend styled with Tailwind CSS.
To run the app locally you will need:
Python 3
pip package manager
Clone the repository
git clone https://github.com/AlanaBF/LLM
cd LLM
Create and activate a virtual environment
python3 -m venv venv # On Windows use `python -m venv venv`
source venv/bin/activate # On Windows use `venv\Scripts\activate`
Install dependencies
cd backend
pip install -r requirements.txt
Generate the Quantized Model
Note: The quantized model file (quantized_t5_large.pth) is not included in the repository due to its size. You will need to generate it by running the provided Jupyter notebook.
Navigate to the root of the cloned repository and open the quantization.ipynb notebook using Jupyter:
jupyter notebook quantization.ipynb
Execute the cells in the notebook. This will:
Load the t5-large model from Hugging Face.
Apply dynamic quantization to reduce the model's size and improve inference speed.
Save the quantized model to quantized_t5_large.pth.
Set Up and Run the Flask App
With the quantized model in place, you can now run the Flask application.
python app.py
The app should now be running on http://127.0.0.1:5000.
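For orientation, here is a minimal sketch of what app.py might look like. The /translate route name, the request and response fields, and the way the quantized weights are restored are illustrative assumptions rather than the repository's exact code.

import torch
from flask import Flask, jsonify, request
from transformers import T5ForConditionalGeneration, T5Tokenizer

app = Flask(__name__)

# Load the base model and tokenizer (T5Tokenizer needs the sentencepiece package).
tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")

# Re-apply dynamic quantization so the saved state_dict matches the model's layout,
# then restore the quantized weights (assumes the notebook saved a state_dict).
model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
model.load_state_dict(torch.load("quantized_t5_large.pth"))
model.eval()

@app.route("/translate", methods=["POST"])
def translate():
    data = request.get_json()
    text = data["text"]
    target = data.get("language", "French")  # "French" or "German"
    prompt = f"translate English to {target}: {text}"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(**inputs, max_length=256)
    translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return jsonify({"translation": translation})

if __name__ == "__main__":
    app.run(debug=True)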
Translation: The app accepts English text input and translates it into French or German using a quantized t5-large model from Hugging Face.
Frontend: A simple interface for interacting with the chatbot, allowing users to input text and view translated results.
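As a quick usage illustration, a client request against the hypothetical /translate endpoint sketched above might look like the following. The requests package is not among the listed dependencies and is used here only for the example.

import requests

resp = requests.post(
    "http://127.0.0.1:5000/translate",
    json={"text": "Hello, how are you?", "language": "German"},
)
print(resp.json()["translation"])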
The quantization.ipynb notebook demonstrates how the t5-large model was quantized to reduce its size and improve inference speed. The quantized model is saved as quantized_t5_large.pth and loaded in app.py.
The notebook:
Loads the t5-large model from Hugging Face.
Applies dynamic quantization to reduce the model's size and improve performance.
Saves the quantized model to quantized_t5_large.pth.
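A minimal sketch of those steps, assuming PyTorch dynamic quantization of the model's Linear layers and that a state_dict (rather than the whole model object) is what gets saved:

import torch
from transformers import T5ForConditionalGeneration

# Load the full-precision model.
model = T5ForConditionalGeneration.from_pretrained("t5-large")

# Dynamic quantization converts Linear weights to int8 and quantizes
# activations on the fly during inference.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Save the quantized weights to the file app.py expects.
torch.save(quantized_model.state_dict(), "quantized_t5_large.pth")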
All dependencies are listed in requirements.txt. Key dependencies include:
Flask: Web framework used for serving the app.
transformers: Hugging Face library for the T5 model.
torch: PyTorch library, used for loading and running the model.
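For reference, a requirements.txt along these lines would cover the packages named above; exact versions, and any extra packages the real file pins, should be taken from the repository's own requirements.txt.

flask
transformers
torch
sentencepiece  # assumption: typically needed by the T5 tokenizer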
This project is licensed under the MIT License - see the LICENSE file for details.
Thank you for visiting my Translator App. I look forward to hearing from you. If you have any questions or need further assistance, please contact me: