# Deployment Guide
This guide provides detailed instructions for deploying LifeUnity AI — Cognitive Twin System to various cloud platforms.
## Table of Contents

- [Streamlit Cloud Deployment](#streamlit-cloud-deployment)
- [Render Deployment](#render-deployment)
- [HuggingFace Spaces Deployment](#huggingface-spaces-deployment)
- [Docker Deployment](#docker-deployment)
- [Post-Deployment](#post-deployment)
- [Troubleshooting](#troubleshooting)
- [Security Best Practices](#security-best-practices)

## Streamlit Cloud Deployment
Streamlit Cloud is the easiest way to deploy this application.
### Prerequisites
- GitHub account
- Streamlit Cloud account (free at https://share.streamlit.io)
### Steps

1. Fork or push this repository to your GitHub account
2. Go to [Streamlit Cloud](https://share.streamlit.io)
3. Click "New app"
4. Configure the app:
   - Repository: `your-username/lifeunity-ai-cognitive-twin`
   - Branch: `main`
   - Main file path: `app/main.py`
5. Advanced settings (optional):
   - Python version: 3.9 or higher
   - Environment variables: none required
6. Click "Deploy"
7. Wait for deployment (the first deployment may take 5-10 minutes)
8. Access your app at `https://[your-app-name].streamlit.app`
### Troubleshooting

- If deployment fails, check the logs in Streamlit Cloud
- Ensure all dependencies in `requirements.txt` are compatible
- Large models (such as transformers) may take time to download on first run
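Pinning exact versions in `requirements.txt` helps avoid resolver conflicts between deployments. The packages and versions below are purely illustrative assumptions, not the project's actual dependency list:

```text
streamlit==1.37.1
transformers==4.44.2
torch==2.3.1
```

Unpinned entries (e.g. a bare `transformers`) can resolve to a different version on each rebuild, which is a common source of "works locally, fails on deploy" issues.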
## Render Deployment

Render provides free web service hosting with automatic deployments.

### Prerequisites

- GitHub account
- Render account (free at https://render.com)
### Steps

1. Push this repository to GitHub
2. Go to the Render Dashboard
3. Click "New +" and select "Web Service"
4. Connect your GitHub repository
5. Configure the service:
   - Name: `lifeunity-ai-cognitive-twin`
   - Environment: Python 3
   - Build Command: `pip install -r requirements.txt`
   - Start Command: `streamlit run app/main.py --server.port=$PORT --server.address=0.0.0.0`
6. Select the free plan
7. Advanced settings:
   - Auto-Deploy: Yes (recommended)
   - Environment variables: none required
8. Click "Create Web Service"
9. Wait for deployment (the first deployment may take 10-15 minutes)
10. Access your app at the provided Render URL
### Using render.yaml (Alternative)

The repository includes a `render.yaml` file. You can:

- Go to the Render Dashboard
- Click "New +" → "Blueprint"
- Connect your repository
- Render will automatically use the `render.yaml` configuration
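For reference, a minimal `render.yaml` blueprint matching the settings above might look like the sketch below; the file shipped in the repository is the authoritative version:

```yaml
services:
  - type: web
    name: lifeunity-ai-cognitive-twin
    env: python
    plan: free
    buildCommand: pip install -r requirements.txt
    startCommand: streamlit run app/main.py --server.port=$PORT --server.address=0.0.0.0
```

With a blueprint, the service configuration lives in version control, so the dashboard steps above become a one-click import.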
## HuggingFace Spaces Deployment

HuggingFace Spaces provides free hosting with optional GPU upgrades.

### Prerequisites

- HuggingFace account (free at https://huggingface.co)
### Steps

1. Go to [HuggingFace Spaces](https://huggingface.co/spaces)
2. Click "Create new Space"
3. Configure the Space:
   - Owner: your username
   - Space name: `lifeunity-ai-cognitive-twin`
   - License: MIT
   - SDK: Streamlit
   - Space hardware: CPU basic (free) or GPU (paid)
   - Visibility: Public or Private
4. Click "Create Space"
5. Clone the Space repository:
   ```bash
   git clone https://huggingface.co/spaces/[your-username]/lifeunity-ai-cognitive-twin
   ```
6. Copy files to the Space:
   ```bash
   cp -r app requirements.txt HF_README.md [space-directory]/
   ```
7. Rename HF_README.md to README.md:
   ```bash
   cd [space-directory]
   mv HF_README.md README.md
   ```
8. Push to HuggingFace:
   ```bash
   git add .
   git commit -m "Initial deployment"
   git push
   ```
9. Wait for deployment (automatic)
10. Access your Space at `https://huggingface.co/spaces/[your-username]/lifeunity-ai-cognitive-twin`
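HuggingFace Spaces reads its configuration from YAML front matter at the top of `README.md`, which is why `HF_README.md` is renamed in the steps above. A minimal header might look like this (the field values are assumptions matching the settings above, not the repository's actual file):

```yaml
---
title: LifeUnity AI Cognitive Twin
sdk: streamlit
app_file: app/main.py
license: mit
---
```

If the front matter is missing or the `sdk`/`app_file` fields don't match the repository layout, the Space build will fail or start the wrong entry point.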
## Docker Deployment

For self-hosting or custom cloud deployments.

### Create a Dockerfile
```dockerfile
FROM python:3.9-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    software-properties-common \
    git \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY app/ ./app/
COPY README.md .

# Expose Streamlit port
EXPOSE 8501

# Health check
HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health

# Run application
ENTRYPOINT ["streamlit", "run", "app/main.py", "--server.port=8501", "--server.address=0.0.0.0"]
```
### Build and Run

```bash
# Build the image
docker build -t lifeunity-ai .

# Run the container
docker run -p 8501:8501 lifeunity-ai
```
### Docker Compose (Optional)

```yaml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "8501:8501"
    volumes:
      - ./data:/app/data
      - ./logs:/app/logs
    restart: unless-stopped
```
## Post-Deployment

### Testing Your Deployment

- Open the application URL
- Navigate through all pages:
  - Dashboard
  - Mood Detection
  - Cognitive Memory
  - AI Insights
- Test core features:
  - Upload an image for mood detection
  - Add a memory note
  - Generate a daily report
- Check for errors in logs
### Monitoring

- Streamlit Cloud: use the built-in logs and metrics
- Render: check logs in the Render dashboard
- HuggingFace: use the Space's Logs tab
- Docker: use `docker logs [container-id]`
## Updating Your Deployment

**Streamlit Cloud & Render:**
- Push changes to your GitHub repository
- Deployment updates automatically

**HuggingFace Spaces:**
- Push changes to the Space repository
- Rebuild happens automatically

**Docker:**
- Rebuild the image: `docker build -t lifeunity-ai .`
- Restart the container
## Troubleshooting

### Common Issues

**Out of Memory:**
- Use smaller models in `requirements.txt`
- Reduce batch sizes
- Upgrade to a paid tier with more RAM

**Slow First Load:**
- Models download on first run (expected)
- Consider caching models in the Docker image

**Port Issues:**
- Ensure you are using the correct port for the platform:
  - Streamlit Cloud: auto-configured
  - Render: use the `$PORT` environment variable
  - HuggingFace: port 7860 (the default for Streamlit Spaces)
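On platforms where the port is fixed (such as HuggingFace Spaces), the port can also be set in `.streamlit/config.toml` instead of on the command line. A minimal sketch, assuming the 7860 default mentioned above:

```toml
[server]
port = 7860
address = "0.0.0.0"
headless = true
```

Command-line flags like `--server.port` take precedence over this file, so keep only one source of truth per deployment.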
**Dependency Conflicts:**
- Check the Python version (3.9+ recommended)
- Update the versions in `requirements.txt`
- Use a virtual environment for testing
### Getting Help

- Check the platform-specific documentation
- Open an issue on GitHub
- Contact platform support
- Review the application logs
## Security Best Practices

**Environment Variables:**
- Store secrets in the platform's secrets management
- Never commit sensitive data

**Data Privacy:**
- User data stays within your deployment
- Consider adding authentication for production use

**Updates:**
- Keep dependencies updated
- Monitor security advisories

**Backups:**
- Regularly back up data directories
- Use version control
For more information, see the main `README.md`.