Basic Setup Tutorial
Prerequisites
- Node.js 18.0 or higher
- Python 3.11 or higher
- Git
- API keys for at least one LLM provider (Groq, Together, Deepseek, Gemini, or OpenAI)
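You can confirm these tools are available from a terminal before continuing (the Python command may be `python3` on your system):

```bash
node --version    # expect v18.0 or higher
python --version  # expect 3.11 or higher (may be python3 on some systems)
git --version
```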
Step 1: Clone the Repository
Start by cloning the Document Q&A repository:
```bash
git clone https://github.com/your-org/document-qa.git
cd document-qa
```
This repository contains both the frontend and backend code needed for the Document Q&A application.
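A quick listing should show the two directories the rest of this tutorial works in:

```bash
ls
# Expect to see at least backend/ and frontend/, used in Steps 2 and 3.
```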
Step 2: Backend Setup
Set up the Python backend:
```bash
cd backend
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
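As a quick sanity check, confirm that the virtual environment is active and that uvicorn (used to start the server in Step 4) was installed; this assumes uvicorn is listed in requirements.txt:

```bash
python -c "import sys; print(sys.prefix)"  # should print a path inside ./venv
uvicorn --version                          # assumes uvicorn is listed in requirements.txt
```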
Configure Environment Variables
Create a .env file in the backend directory with the following variables:
```env
# Required LLM API keys (at least one is required)
GROQ_API_KEY=your_groq_api_key
TOGETHER_API_KEY=your_together_api_key
DEEPSEEK_API_KEY=your_deepseek_api_key
GEMINI_API_KEY=your_gemini_api_key
OPENAI_API_KEY=your_openai_api_key

# Optional AWS configuration for metrics
AWS_ACCESS_KEY_ID=your_aws_access_key
AWS_SECRET_ACCESS_KEY=your_aws_secret_key
AWS_REGION=us-east-1
S3_BUCKET=your_bucket_name
S3_PERFORMANCE_LOGS_PREFIX=performance_logs/

# Application settings
UPLOAD_DIR=./uploads
MAX_FILE_SIZE=10485760  # 10MB in bytes
```
Step 3: Frontend Setup
Set up the Next.js frontend:
```bash
cd ../frontend
npm install
```
Configure Environment Variables
Create a .env.local file in the frontend directory:
```env
NEXT_PUBLIC_API_URL=http://localhost:8001
BACKEND_URL=http://localhost:8001
NEXT_PUBLIC_BASE_URL=http://localhost:3000
NEXT_PUBLIC_ENABLE_METRICS_DASHBOARD=true
NEXT_PUBLIC_ENABLE_MODEL_SELECTION=true
```
Step 4: Start the Application
Start the backend server:
```bash
# In the backend directory with venv activated
python -m uvicorn app.main:app --reload --port 8001
```
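To confirm the backend came up, you can probe it with curl. Assuming the backend is a FastAPI application (as the app.main:app entry point run under uvicorn suggests), the interactive API docs are typically served at /docs:

```bash
curl -I http://localhost:8001/docs
# A 200 OK response indicates the backend is serving requests.
```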
Start the frontend development server:
```bash
# In the frontend directory
npm run dev
```
The application should now be running at http://localhost:3000.
Step 5: Verify Installation
To verify that everything is working correctly:
- Navigate to http://localhost:3000 in your browser
- Go to the Chat page
- Upload a document (PDF, TXT, DOC, or DOCX)
- Ask a question about the document
- Verify that you receive a response from the LLM
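If any of these steps fail, a quick connectivity check can help narrow down whether both servers are reachable (using the default ports from this tutorial; the /docs path assumes a FastAPI backend):

```bash
curl -s -o /dev/null -w "frontend: %{http_code}\n" http://localhost:3000
curl -s -o /dev/null -w "backend:  %{http_code}\n" http://localhost:8001/docs
# Both commands should print a 200 status code.
```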
Next Steps
Now that you have the basic setup working, you can: