feat: Add Docker entrypoint scripts for SIDEL ScriptsManager

- Implemented `docker-entrypoint-debug.sh` for debugging purposes, including environment checks and database setup.
- Created `docker-entrypoint-simple.sh` for a streamlined entrypoint without debug information.
- Developed `docker-entrypoint.sh` with additional checks for directory structure and environment validation.
- Added `migrate_sqlite_to_postgresql.py` script for migrating data from SQLite to PostgreSQL, including backup and verification features.
- Created SQL scripts for initializing the PostgreSQL database, including extensions, indexes, and default data.
- Enhanced database setup procedures to support both PostgreSQL and SQLite configurations.
Miguel 2025-09-13 19:25:33 +02:00
parent 1dea13a5ad
commit 89ae9cd773
47 changed files with 12190 additions and 1849 deletions


@@ -1457,7 +1457,7 @@ if __name__ == '__main__':
    # Run Flask server
    print(f"Starting SIDEL script for project: {args.project_name} (Theme: {args.theme}, Language: {args.language})")
-   app.run(host='127.0.0.1', port=args.port, debug=False)
+   app.run(host='0.0.0.0', port=args.port, debug=False)
```

### Data Management Guidelines
@@ -1473,14 +1473,45 @@ if __name__ == '__main__':
### Flask Interface Requirements
1. **Port Binding**: Must bind to the exact port provided by SIDEL ScriptsManager
-2. **Host Restriction**: Bind only to `127.0.0.1` for security
-3. **Graceful Shutdown**: Handle SIGTERM for clean shutdown
-4. **Session Management**: Maintain user context throughout session
-5. **Error Reporting**: Report errors through standard logging
-6. **SIDEL Branding**: Include SIDEL logo and consistent visual identity
-7. **Project Context**: Display project name prominently in interface
-8. **Theme Consistency**: Apply the provided theme (light/dark) throughout the interface
-9. **Language Support**: Use the provided language for interface localization and messages
+2. **Docker Host Binding**: Must bind to `0.0.0.0` when running in Docker containers to allow external access
+3. **Local Development**: Can use `127.0.0.1` for direct host execution, but `0.0.0.0` is recommended for consistency
+4. **Graceful Shutdown**: Handle SIGTERM for clean shutdown
+5. **Session Management**: Maintain user context throughout session
+6. **Error Reporting**: Report errors through standard logging
+7. **SIDEL Branding**: Include SIDEL logo and consistent visual identity
+8. **Project Context**: Display project name prominently in interface
+9. **Theme Consistency**: Apply the provided theme (light/dark) throughout the interface
+10. **Language Support**: Use the provided language for interface localization and messages
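The graceful-shutdown requirement above can be sketched as follows. This is a minimal illustration, not the project's actual implementation; the function names are invented for the example, and the handler simply exits after a log line (a real script might also close database connections or flush execution logs first):

```python
import signal
import sys


def handle_sigterm(signum, frame):
    """Exit with status 0 so the container/process manager sees a clean stop."""
    print("SIGTERM received, shutting down cleanly", file=sys.stderr)
    sys.exit(0)


def install_shutdown_handler():
    """Register the handler; call once before app.run() / socketio.run()."""
    signal.signal(signal.SIGTERM, handle_sigterm)
```

Docker sends SIGTERM on `docker stop`, so without such a handler the script would be killed with SIGKILL after the stop timeout instead of exiting cleanly.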
### Docker Networking Requirements
For proper Docker deployment, SIDEL ScriptsManager uses **host networking mode** to ensure script interfaces are accessible:
1. **Host Network Mode**: The main application container uses `network_mode: host` in docker-compose.yml
2. **Database Connectivity**: PostgreSQL remains in bridge network mode with port mapping for isolation
3. **Script Interface Access**: Scripts binding to `0.0.0.0` are directly accessible on host ports
4. **Port Range**: Scripts use ports 5200-5400 as configured in ScriptsManager
5. **No Port Mapping**: Host networking eliminates the need for explicit port mapping in docker-compose
#### Example Docker Compose Configuration
```yaml
services:
scriptsmanager:
network_mode: host
environment:
- DATABASE_URL=postgresql://user:pass@localhost:5432/db
# No ports section needed with host networking
postgres:
ports:
- "5432:5432"
# Database keeps bridge networking for isolation
```
#### Benefits of Host Networking
- **Simplified Configuration**: No need to map script port ranges
- **Dynamic Port Allocation**: Scripts can use any available port in the configured range
- **Better Performance**: Eliminates network address translation overhead
- **Easier Debugging**: Direct access to script interfaces without port mapping complexity
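The dynamic port allocation described above can be sketched with a small helper that probes the configured range. This is an illustrative snippet (the helper name is invented; ScriptsManager's real allocator is not shown in this diff), but it demonstrates why host networking makes the scheme simple — a bind that succeeds on the host is directly reachable:

```python
import socket


def find_free_script_port(start=5200, end=5400):
    """Return the first TCP port in [start, end] that can be bound on all
    interfaces, or None if the whole range is busy."""
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind(("0.0.0.0", port))
            except OSError:
                continue  # Port taken by another script or process
            return port
    return None
```

With bridge networking, every possible port in 5200-5400 would need an explicit mapping in docker-compose; with `network_mode: host`, whichever port the helper returns is immediately usable.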
## Multi-User Data Architecture
@@ -1654,17 +1685,23 @@ data/
- **WebSocket support**: For real-time log streaming
### Database Engine
-**SQLite** (Recommended for cross-platform deployment)
+**PostgreSQL** (Recommended for professional deployment)
- **Rationale**:
-  - Zero-configuration setup
-  - Cross-platform compatibility (Linux/Windows)
-  - Single file database for easy backup
-  - Built-in Python support
-  - Sufficient performance for engineering script management
-  - No additional server requirements
-- **File-based storage**: Simplifies deployment and maintenance
-- **Automatic backup integration**: Single file backup with system data
-- **Migration path**: Can upgrade to PostgreSQL if needed in future
+  - Production-ready RDBMS with ACID compliance
+  - Better concurrent access handling for multi-user environments
+  - Advanced features: JSON columns, full-text search, indexing
+  - Horizontal scaling capabilities for future growth
+  - Robust backup and recovery mechanisms
+- **Docker containerization**: Isolated database service with persistent volumes
+- **Development/Production parity**: Same database engine across environments
+- **Connection pooling**: Built-in support for connection management
+- **Migration support**: Easy schema upgrades and data migrations
**Alternative: SQLite** (For lightweight deployments)
- **Use case**: Single-user or small team environments
- **Zero-configuration**: Suitable for quick development setup
- **File-based storage**: Simplified deployment for simple use cases
- **Limitation**: Limited concurrent access and scalability
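Since both engines remain supported, the application has to decide at startup which backend a given `DATABASE_URL` refers to. A minimal sketch of such a dispatch (the helper name is hypothetical; the real configuration code is in `app/config` and not fully shown in this diff):

```python
from urllib.parse import urlparse


def database_backend(database_url):
    """Classify a DATABASE_URL as 'postgresql' or 'sqlite'."""
    # Strip a driver suffix such as 'postgresql+psycopg2'
    scheme = urlparse(database_url).scheme.split("+")[0]
    if scheme.startswith("postgres"):
        return "postgresql"
    if scheme == "sqlite":
        return "sqlite"
    raise ValueError(f"Unsupported DATABASE_URL scheme: {scheme!r}")
```

For example, `database_backend("sqlite:///data/scriptsmanager.db")` selects the lightweight path, while the Docker compose URL `postgresql://...@localhost:5432/scriptsmanager` selects PostgreSQL.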
### Python Dependencies
```bash
@@ -1676,7 +1713,8 @@ flask-wtf>=1.2.0
flask-socketio>=5.3.0
# Database
-sqlite3  # Built-in with Python 3.12+
+psycopg2-binary>=2.9.7  # PostgreSQL adapter for Python
+SQLAlchemy>=2.0.16  # ORM with PostgreSQL support
# Web Server
gunicorn>=21.2.0  # Production WSGI server
@@ -1708,7 +1746,82 @@ black>=23.9.0  # Code formatting
flake8>=6.1.0  # Code linting
```
-### Installation Script
+### Docker Multi-Container Architecture
The application uses a multi-container Docker setup with **host networking** for better script interface accessibility:
#### Container Structure
```yaml
# docker-compose.yml
services:
# PostgreSQL Database Container (Bridge Network)
postgres:
image: postgres:15-alpine
container_name: scriptsmanager_postgres
environment:
POSTGRES_DB: scriptsmanager
POSTGRES_USER: scriptsmanager
POSTGRES_PASSWORD: scriptsmanager_dev_password
volumes:
- postgres_data:/var/lib/postgresql/data
- ./sql:/docker-entrypoint-initdb.d
ports:
- "5432:5432" # Port mapping for database access
healthcheck:
test: ["CMD-SHELL", "pg_isready -U scriptsmanager"]
interval: 10s
timeout: 5s
retries: 5
# Application Container (Production) - Host Network
scriptsmanager:
build: .
network_mode: host # Critical for script interface access
environment:
- DATABASE_URL=postgresql://scriptsmanager:scriptsmanager_dev_password@localhost:5432/scriptsmanager
- DEBUG=false
- PORT_RANGE_START=5200
- PORT_RANGE_END=5400
depends_on:
postgres:
condition: service_healthy
volumes:
- ./data:/app/data
- ./backup:/app/backup
- ./logs:/app/logs
- ./app/backend/script_groups:/app/app/backend/script_groups
# Application Container (Development) - Host Network
scriptsmanager-dev:
build: .
network_mode: host # Critical for script interface access
environment:
- DATABASE_URL=postgresql://scriptsmanager:scriptsmanager_dev_password@localhost:5432/scriptsmanager
- DEBUG=true
- PORT_RANGE_START=5200
- PORT_RANGE_END=5400
depends_on:
postgres:
condition: service_healthy
volumes:
- .:/app # Hot reload - entire codebase mounted
- ./backup:/app/backup
- ./logs:/app/logs
volumes:
postgres_data:
driver: local
```
#### Benefits of Host Networking Architecture
- **Simplified Script Access**: Script interfaces directly accessible on host ports
- **No Port Mapping Complexity**: Dynamic port allocation without explicit mapping
- **Better Performance**: Eliminates network address translation overhead
- **Production Parity**: Same networking behavior in development and production
- **Easier Debugging**: Direct access to script interfaces for troubleshooting
- **Database Isolation**: PostgreSQL remains isolated in bridge network for security
- **Dynamic Port Range**: Scripts can use any available port in configured range (5200-5400)
- **Container Communication**: Application containers access database via localhost:5432
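The `depends_on: condition: service_healthy` guard covers container startup order, but the application can also defend itself with a short connection-wait on `localhost:5432` before initializing SQLAlchemy. A sketch under stated assumptions (the function name is invented, and a plain TCP probe is used instead of a real `psycopg2` connection to keep the example dependency-free):

```python
import socket
import time


def wait_for_postgres(host="localhost", port=5432, timeout=30.0, interval=1.0):
    """Poll until a TCP connection to the database port succeeds.

    Mirrors the compose healthcheck (pg_isready) from the application side.
    Returns True on success, False if the deadline expires.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

Because the application container runs with `network_mode: host`, the mapped `5432:5432` port of the bridge-networked PostgreSQL container is reachable exactly as `localhost:5432`, which is what the probe (and the `DATABASE_URL`) assumes.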
```bash
# Create Python 3.12+ virtual environment
python3.12 -m venv scriptsmanager_env
@@ -1883,27 +1996,79 @@ RestartSec=10
WantedBy=multi-user.target
```
-### Development Setup
+### Development Environment
+#### Docker Development Stack
+The application provides a complete Docker-based development environment with hot-reload capabilities:
```bash
-# Development environment setup
-git clone <repository-url> scriptsmanager
-cd scriptsmanager
-# Create virtual environment with Python 3.12+
-python3.12 -m venv venv
-source venv/bin/activate  # Linux/Mac
-# venv\Scripts\activate   # Windows
-# Install development dependencies
-pip install -r requirements-dev.txt
-# Initialize development database
-python scripts/init_dev_db.py
-# Start development server
-flask run --debug --host=127.0.0.1 --port=5000
+# Start development environment
+./docker-manage.sh start-dev
+# Stop development environment
+./docker-manage.sh stop-dev
+# Check logs
+./docker-manage.sh logs-dev
+# Rebuild development image
+./docker-manage.sh build-dev
```
#### Development Features
- **Hot Reload**: Code changes automatically reflected without rebuilds
- **Database Persistence**: PostgreSQL data survives container restarts
- **Debug Support**: VS Code debugging through remote containers
- **Port Forwarding**: Application accessible at localhost:5003
- **Conda Environments**:
- `scriptsmanager`: Main Flask application
- `tsnet`: Scientific computing and analysis tools
- **Volume Mounts**:
- Source code: Live editing with hot reload
- Data directory: Persistent script storage
- Logs: Development debugging
- Backup: Development backup testing
#### Local Development Setup (Alternative)
For developers preferring local execution:
1. Install conda and create environments:
```bash
conda env create -f conda-environments.yml
conda activate scriptsmanager
```
2. Setup PostgreSQL locally:
```bash
# Install PostgreSQL
sudo apt install postgresql postgresql-contrib
# Create database and user
sudo -u postgres psql
CREATE DATABASE scriptsmanager;
CREATE USER scriptsmanager WITH PASSWORD 'dev_password';
GRANT ALL PRIVILEGES ON DATABASE scriptsmanager TO scriptsmanager;
```
3. Configure environment:
```bash
export DATABASE_URL="postgresql://scriptsmanager:dev_password@localhost:5432/scriptsmanager"
export DEBUG=true
```
4. Run application:
```bash
python scripts/run_app.py
```
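Both setups above depend on environment variables being present before the Flask app starts. A fail-fast check like the following can catch a missing `DATABASE_URL` early (an illustrative helper; the variable list matches the compose examples above, and the function name is invented):

```python
import os

# Variables the docker-compose examples above always provide
REQUIRED_VARS = ("DATABASE_URL",)


def missing_environment(env=None):
    """Return the names of required variables that are unset or empty."""
    if env is None:
        env = os.environ
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

A startup script would call `missing_environment()` and exit with a clear message listing the missing names instead of failing later with an opaque SQLAlchemy connection error.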
#### VS Code Integration
The project includes VS Code workspace configuration for:
- Remote container development
- Python debugging with breakpoints
- Integrated terminal with conda environments
- Docker container management
- PostgreSQL database browser extensions
### Cross-Platform Considerations
- **Path Handling**: Use `pathlib.Path` for cross-platform file operations
- **Process Management**: Platform-specific conda activation commands

CLEANUP-COMPLETED.md (new file, 117 lines)

@@ -0,0 +1,117 @@
# ✅ Script Cleanup Completed
## 📊 Operation Summary
### 🗑️ **Scripts Removed**: 19 files
#### Test scripts (7 removed):
- `test_complete_integration.py`
- `test_hammer_browser.py`
- `test_hammer_calculations.py`
- `test_helper_functions.py`
- `test_language_switching.py`
- `test_model.py`
- `test_permissions.py`
#### UI verification scripts (5 removed):
- `verify_dashboard_changes.py`
- `verify_design_changes.py`
- `verify_group5.py`
- `verify_navbar_themes.py`
- `verify_sidel_logo.py`
#### Debug/check scripts (7 removed):
- `check_complete_log.py`
- `check_db.py`
- `check_group_env.py`
- `check_log.py`
- `check_tables.py`
- `debug_discovery.py`
- `simple_debug.py`
---
### ✅ **Scripts Kept**: 3 functional files
#### In the root directory:
- `demo_scriptsmanager_integration.py` - Integration demo
- `example_script.py` - Example script
- `migrate_execution_logs.py` - Log migration
#### Essential scripts preserved:
- `verify-environments.sh` - **CRITICAL** - Docker environment verification
- `app/backend/script_groups/hammer/test_plantuml.py` - Functional test
---
### 💾 **Backup Created**
All removed scripts are backed up in:
```
backup/old_scripts/
├── 19 .py files (1,177 lines total)
└── Available for restoration if needed
```
---
### ✅ **Post-Cleanup Verification**
#### Docker system working:
- ✅ **Container started** correctly
- ✅ **Frontend accessible** at http://localhost:5002 (HTTP 302 → /login)
- ✅ **Conda environments** working:
  - `scriptsmanager` - Python 3.12 + Flask 3.1.2
  - `tsnet` - Python 3.12 + TSNet 0.2.2
- ✅ **Management scripts** working:
  - `sudo ./docker-manage.sh envs`
  - `sudo ./verify-environments.sh`
  - `sudo ./docker-manage.sh status`
---
### 🔄 **Functionality Replaced**
The removed scripts are now replaced by:
#### Instead of test_*.py:
```bash
sudo ./docker-manage.sh health
sudo ./verify-environments.sh
```
#### Instead of verify_*.py:
```bash
sudo ./docker-manage.sh status
sudo ./docker-manage.sh logs
curl -I http://localhost:5002
```
#### Instead of debug_*.py and check_*.py:
```bash
sudo ./docker-manage.sh shell
sudo ./docker-manage.sh logs
sudo ./docker-manage.sh envs
```
---
### 📈 **Cleanup Impact**
- **.py files in root**: 22 → 3 (86% reduction)
- **Lines of code removed**: 1,177
- **Maintainability**: ✅ Improved (fewer obsolete files)
- **Functionality**: ✅ Preserved (modern Docker commands)
---
### 🎯 **Final State**
The workspace is now **clean and organized** with:
- ✅ **Only necessary scripts** kept
- ✅ **Full functionality** preserved
- ✅ **Docker system** fully working
- ✅ **Safe backup** of removed scripts
- ✅ **Modern commands** replacing the old functionality
**🏁 Cleanup completed successfully!**


@@ -67,6 +67,9 @@ ENV PATH /opt/conda/envs/$CONDA_ENV_NAME/bin:$PATH
# Install Python dependencies in the main environment (ScriptsManager)
RUN /opt/conda/envs/$CONDA_ENV_NAME/bin/pip install --no-cache-dir -r requirements.txt
+# Install psycopg2 via conda for better compatibility
+RUN /opt/conda/bin/conda install -n $CONDA_ENV_NAME -c conda-forge psycopg2 -y
# Create a dedicated environment for TSNet (Water Hammer Simulator)
RUN conda create -n $TSNET_ENV_NAME python=3.12 -y
@@ -101,45 +104,18 @@ RUN if [ -d "backend/script_groups" ]; then \
fi
# Configure user and permissions to avoid problems with volumes
-RUN groupadd -r scriptsmanager && useradd -r -g scriptsmanager -d /app -s /bin/bash scriptsmanager
+# Use UID 1000 for compatibility with the host user
+RUN groupadd -g 1000 scriptsmanager && useradd -u 1000 -g 1000 -d /app -s /bin/bash scriptsmanager
-# Set correct permissions
+# Set correct permissions and create directories for the scriptsmanager user
RUN chmod +x scripts/*.py && \
+    mkdir -p data instance logs/{executions,system,audit} backup/daily && \
    chown -R scriptsmanager:scriptsmanager $APP_HOME && \
    chmod 755 $DATA_HOME $BACKUP_HOME $LOGS_HOME && \
    chown -R scriptsmanager:scriptsmanager $DATA_HOME $BACKUP_HOME $LOGS_HOME
-# Create an initialization script specific to SIDEL ScriptsManager
+# Copy the initialization script specific to SIDEL ScriptsManager
-RUN echo '#!/bin/bash\n\
-set -e\n\
-echo "=== SIDEL ScriptsManager Initialization ==="\n\
-source activate scriptsmanager\n\
-cd /app\n\
-\n\
-# Check the directory structure\n\
-echo "Checking directory structure..."\n\
-if [ ! -d "app/backend/script_groups" ]; then\n\
-echo "ERROR: app/backend/script_groups directory not found!"\n\
-exit 1\n\
-fi\n\
-\n\
-# Initialize the SQLite database\n\
-echo "Initializing SQLite database..."\n\
-python scripts/init_db.py\n\
-\n\
-# Check conda environments\n\
-echo "Available conda environments:"\n\
-conda env list\n\
-\n\
-echo "ScriptsManager environment packages:"\n\
-conda list -n scriptsmanager | grep -E "(flask|sqlalchemy)" || true\n\
-\n\
-echo "TSNet environment packages:"\n\
-conda list -n tsnet | grep -E "(tsnet|numpy|matplotlib)" || true\n\
-\n\
-echo "=== Starting SIDEL ScriptsManager ==="\n\
-exec "$@"' > /app/docker-entrypoint.sh
+COPY docker-entrypoint-debug.sh /app/docker-entrypoint.sh
RUN chmod +x /app/docker-entrypoint.sh
# Main frontend port (5002) and dynamic port range (5200-5400)


@@ -0,0 +1,496 @@
#!/bin/bash
# Docker management script for SIDEL ScriptsManager
# Compatible with the project specifications
# Usage: ./docker-manage.sh [command]
set -e
# Output colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
NC='\033[0m' # No Color
# SIDEL project variables
SIDEL_APP_PORT=5002
SIDEL_DEV_PORT=5003
SIDEL_SCRIPT_PORT_RANGE="5200-5400"
SIDEL_CONTAINER_NAME="sidel_scriptsmanager"
SIDEL_DEV_CONTAINER_NAME="sidel_scriptsmanager_dev"
# Show the SIDEL banner
show_banner() {
echo -e "${BLUE}================================================================${NC}"
echo -e "${BLUE} SIDEL ScriptsManager - Docker Management ${NC}"
echo -e "${BLUE}================================================================${NC}"
echo -e "${PURPLE}Multi-User Script Manager with Conda Environments${NC}"
echo -e "${PURPLE}Frontend Port: ${SIDEL_APP_PORT} | Script Ports: ${SIDEL_SCRIPT_PORT_RANGE}${NC}"
echo -e "${BLUE}================================================================${NC}"
echo ""
}
# Show help
show_help() {
show_banner
echo "Uso: $0 [comando]"
echo ""
echo "Comandos disponibles:"
echo -e " ${GREEN}build${NC} Construir la imagen Docker"
echo -e " ${GREEN}start${NC} Iniciar SIDEL ScriptsManager en producción"
echo -e " ${GREEN}start-dev${NC} Iniciar en modo desarrollo con hot-reload"
echo -e " ${GREEN}start-backup${NC} Iniciar servicio de backup automático"
echo -e " ${GREEN}start-monitoring${NC} Iniciar servicio de monitoreo de logs"
echo -e " ${GREEN}stop${NC} Detener todos los contenedores"
echo -e " ${GREEN}restart${NC} Reiniciar el contenedor principal"
echo -e " ${GREEN}logs${NC} Mostrar logs del contenedor principal"
echo -e " ${GREEN}logs-dev${NC} Mostrar logs del contenedor de desarrollo"
echo -e " ${GREEN}shell${NC} Abrir shell en el contenedor principal"
echo -e " ${GREEN}shell-dev${NC} Abrir shell en el contenedor de desarrollo"
echo -e " ${GREEN}backup${NC} Ejecutar backup manual del sistema"
echo -e " ${GREEN}clean${NC} Limpiar contenedores e imágenes no utilizadas"
echo -e " ${GREEN}reset${NC} Resetear completamente (¡CUIDADO: Borra datos!)"
echo -e " ${GREEN}status${NC} Mostrar estado de los contenedores"
echo -e " ${GREEN}envs${NC} Listar entornos conda disponibles"
echo -e " ${GREEN}health${NC} Verificar salud de la aplicación"
echo -e " ${GREEN}init-db${NC} Inicializar base de datos SIDEL"
echo -e " ${GREEN}verify${NC} Verificar configuración y entornos"
echo -e " ${GREEN}ports${NC} Mostrar puertos en uso"
echo -e " ${GREEN}users${NC} Gestionar usuarios (requiere shell activo)"
echo ""
echo "Servicios opcionales (perfiles):"
echo -e " ${YELLOW}--profile dev${NC} Modo desarrollo"
echo -e " ${YELLOW}--profile backup${NC} Backup automático"
echo -e " ${YELLOW}--profile monitoring${NC} Monitoreo de logs"
echo ""
echo "Ejemplos:"
echo " $0 build && $0 start"
echo " $0 start-dev"
echo " $0 logs -f"
echo " $0 verify"
}
# Check whether docker-compose is available
check_docker_compose() {
# Check Docker permissions first
if ! docker ps &> /dev/null; then
echo -e "${RED}Error: No tienes permisos para acceder a Docker${NC}"
echo -e "${YELLOW}Soluciones posibles:${NC}"
echo "1. Reinicia tu terminal/WSL después de agregar tu usuario al grupo docker"
echo "2. O ejecuta: sudo usermod -aG docker \$USER && newgrp docker"
echo "3. O usa sudo: sudo ./docker-manage.sh [comando]"
echo "4. O ejecuta: su - \$USER (para recargar grupos)"
exit 1
fi
if command -v docker-compose &> /dev/null; then
DOCKER_COMPOSE="docker-compose"
elif docker compose version &> /dev/null; then
DOCKER_COMPOSE="docker compose"
else
echo -e "${RED}Error: docker-compose no está disponible${NC}"
echo -e "${YELLOW}Instalando docker-compose...${NC}"
# Try to install docker-compose
if command -v apt &> /dev/null; then
sudo apt update && sudo apt install -y docker-compose
elif command -v yum &> /dev/null; then
sudo yum install -y docker-compose
else
echo "Por favor instala docker-compose manualmente"
exit 1
fi
# Check again
if command -v docker-compose &> /dev/null; then
DOCKER_COMPOSE="docker-compose"
else
exit 1
fi
fi
}
# Build the image
build_image() {
show_banner
echo -e "${BLUE}Construyendo imagen Docker para SIDEL ScriptsManager...${NC}"
check_docker_compose
# Check the directory structure before building
if [ -d "backend/script_groups" ]; then
echo -e "${RED}❌ ERROR: Encontrado directorio incorrecto 'backend/script_groups/'${NC}"
echo -e "${RED} Según especificaciones SIDEL, los scripts deben estar en 'app/backend/script_groups/' únicamente${NC}"
exit 1
fi
if [ ! -d "app/backend/script_groups" ]; then
echo -e "${YELLOW}⚠️ Creando directorio app/backend/script_groups/...${NC}"
mkdir -p app/backend/script_groups/hammer
mkdir -p app/backend/script_groups/data_processing
mkdir -p app/backend/script_groups/system_utilities
fi
$DOCKER_COMPOSE build scriptsmanager
echo -e "${GREEN}✅ Imagen SIDEL ScriptsManager construida exitosamente${NC}"
}
# Start in production mode
start_production() {
show_banner
echo -e "${BLUE}Iniciando SIDEL ScriptsManager en modo producción...${NC}"
check_docker_compose
# Create the directories required by the SIDEL specifications
echo -e "${BLUE}Preparando estructura de directorios...${NC}"
mkdir -p data/script_groups data/system
mkdir -p logs/executions logs/system logs/audit
mkdir -p backup/daily
mkdir -p instance
# Copy the environment file if it does not exist
if [ ! -f .env ]; then
echo -e "${YELLOW}Creando archivo .env desde .env.example${NC}"
cp .env.example .env
echo -e "${YELLOW}¡IMPORTANTE: Edita el archivo .env con tus configuraciones de producción!${NC}"
fi
$DOCKER_COMPOSE up -d scriptsmanager
echo -e "${GREEN}✅ SIDEL ScriptsManager iniciado en http://localhost:${SIDEL_APP_PORT}${NC}"
echo -e "${BLUE}📊 Dashboard multiusuario disponible${NC}"
echo -e "${BLUE}🔧 Scripts TSNet en puertos ${SIDEL_SCRIPT_PORT_RANGE}${NC}"
}
# Start in development mode
start_development() {
show_banner
echo -e "${BLUE}Iniciando SIDEL ScriptsManager en modo desarrollo...${NC}"
check_docker_compose
mkdir -p data/script_groups data/system
mkdir -p logs/executions logs/system logs/audit
mkdir -p backup/daily
if [ ! -f .env ]; then
cp .env.example .env
fi
$DOCKER_COMPOSE --profile dev up -d scriptsmanager-dev
echo -e "${GREEN}✅ SIDEL ScriptsManager (desarrollo) iniciado en http://localhost:${SIDEL_DEV_PORT}${NC}"
echo -e "${BLUE}🔄 Hot-reload activado para desarrollo${NC}"
}
# Start the automatic backup service
start_backup_service() {
show_banner
echo -e "${BLUE}Iniciando servicio de backup automático...${NC}"
check_docker_compose
$DOCKER_COMPOSE --profile backup up -d backup
echo -e "${GREEN}✅ Servicio de backup automático iniciado${NC}"
echo -e "${BLUE}📦 Backups diarios programados${NC}"
}
# Start the log monitoring service
start_monitoring() {
show_banner
echo -e "${BLUE}Iniciando servicio de monitoreo de logs...${NC}"
check_docker_compose
$DOCKER_COMPOSE --profile monitoring up -d log-monitor
echo -e "${GREEN}✅ Servicio de monitoreo iniciado${NC}"
echo -e "${BLUE}📊 Monitoreo de logs multiusuario activado${NC}"
}
# Stop the containers
stop_containers() {
echo -e "${BLUE}Deteniendo contenedores...${NC}"
check_docker_compose
$DOCKER_COMPOSE down
echo -e "${GREEN}Contenedores detenidos${NC}"
}
# Restart the main container
restart_container() {
echo -e "${BLUE}Reiniciando contenedor principal...${NC}"
check_docker_compose
$DOCKER_COMPOSE restart scriptsmanager
echo -e "${GREEN}Contenedor reiniciado${NC}"
}
# Show logs
show_logs() {
check_docker_compose
$DOCKER_COMPOSE logs "${@:2}" scriptsmanager
}
# Show development logs
show_dev_logs() {
check_docker_compose
$DOCKER_COMPOSE logs "${@:2}" scriptsmanager-dev
}
# Open a shell
open_shell() {
check_docker_compose
echo -e "${BLUE}Abriendo shell en el contenedor...${NC}"
$DOCKER_COMPOSE exec scriptsmanager bash
}
# Open a development shell
open_dev_shell() {
check_docker_compose
echo -e "${BLUE}Abriendo shell en el contenedor de desarrollo...${NC}"
$DOCKER_COMPOSE exec scriptsmanager-dev bash
}
# Run a manual backup
manual_backup() {
echo -e "${BLUE}Ejecutando backup manual...${NC}"
check_docker_compose
$DOCKER_COMPOSE exec scriptsmanager bash -c "source activate scriptsmanager && python -c 'from app.services.backup_service import BackupService; BackupService().create_backup()'"
echo -e "${GREEN}Backup completado${NC}"
}
# Clean up Docker
clean_docker() {
echo -e "${YELLOW}Limpiando contenedores e imágenes no utilizadas...${NC}"
docker system prune -f
echo -e "${GREEN}Limpieza completada${NC}"
}
# Full reset
reset_all() {
echo -e "${RED}¡ADVERTENCIA! Esto eliminará todos los datos y contenedores.${NC}"
read -p "¿Estás seguro? (escribe 'yes' para continuar): " -r
if [[ $REPLY == "yes" ]]; then
check_docker_compose
$DOCKER_COMPOSE down -v
docker system prune -af
sudo rm -rf data/* backup/* logs/*
echo -e "${GREEN}Reset completado${NC}"
else
echo -e "${YELLOW}Operación cancelada${NC}"
fi
}
# Show container status
show_status() {
echo -e "${BLUE}Estado de los contenedores:${NC}"
check_docker_compose
$DOCKER_COMPOSE ps
}
# List conda environments
list_conda_envs() {
echo -e "${BLUE}Entornos conda disponibles:${NC}"
check_docker_compose
$DOCKER_COMPOSE exec scriptsmanager bash -c "conda env list"
}
# Health check
health_check() {
echo -e "${BLUE}Verificando salud de SIDEL ScriptsManager...${NC}"
if curl -f http://localhost:${SIDEL_APP_PORT}/health >/dev/null 2>&1; then
echo -e "${GREEN}✓ Aplicación saludable en puerto ${SIDEL_APP_PORT}${NC}"
else
echo -e "${RED}✗ Aplicación no responde en puerto ${SIDEL_APP_PORT}${NC}"
exit 1
fi
}
# Initialize the SIDEL database
init_database() {
show_banner
echo -e "${BLUE}Inicializando base de datos SIDEL ScriptsManager...${NC}"
check_docker_compose
if ! docker ps | grep -q $SIDEL_CONTAINER_NAME; then
echo -e "${YELLOW}Contenedor no está ejecutándose. Iniciando temporalmente...${NC}"
$DOCKER_COMPOSE up -d scriptsmanager
sleep 10
fi
$DOCKER_COMPOSE exec scriptsmanager bash -c "source activate scriptsmanager && python scripts/init_sidel_db.py"
echo -e "${GREEN}✅ Base de datos SIDEL inicializada${NC}"
}
# Verify the full configuration
verify_configuration() {
show_banner
echo -e "${BLUE}Verificando configuración SIDEL ScriptsManager...${NC}"
echo -e "${BLUE}📁 Verificando estructura de directorios...${NC}"
# Check critical directories
if [ ! -d "app/backend/script_groups" ]; then
echo -e "${RED}❌ app/backend/script_groups/ no encontrado${NC}"
return 1
else
echo -e "${GREEN}✅ app/backend/script_groups/ correcto${NC}"
fi
if [ -d "backend/script_groups" ]; then
echo -e "${RED}❌ backend/script_groups/ existe (NO debería existir)${NC}"
return 1
else
echo -e "${GREEN}✅ backend/script_groups/ no existe (correcto)${NC}"
fi
# Check configuration files
if [ -f "requirements.txt" ]; then
echo -e "${GREEN}✅ requirements.txt encontrado${NC}"
else
echo -e "${RED}❌ requirements.txt no encontrado${NC}"
fi
if [ -f "app/backend/script_groups/hammer/requirements.txt" ]; then
echo -e "${GREEN}✅ TSNet requirements.txt encontrado${NC}"
else
echo -e "${RED}❌ TSNet requirements.txt no encontrado${NC}"
fi
# Check the container if it is running
if docker ps | grep -q $SIDEL_CONTAINER_NAME; then
echo -e "${BLUE}🐳 Verificando entornos conda en contenedor...${NC}"
# Check conda environments
echo -e "${BLUE}📋 Entornos conda disponibles:${NC}"
$DOCKER_COMPOSE exec scriptsmanager conda env list
echo -e "${BLUE}🔍 Verificando paquetes en entorno scriptsmanager:${NC}"
$DOCKER_COMPOSE exec scriptsmanager bash -c "source activate scriptsmanager && python -c 'import flask, flask_socketio; print(f\"Flask: {flask.__version__}, SocketIO: {flask_socketio.__version__}\")'"
echo -e "${BLUE}🔍 Verificando paquetes en entorno tsnet:${NC}"
$DOCKER_COMPOSE exec scriptsmanager bash -c "source activate tsnet && python -c 'import numpy, matplotlib; print(f\"NumPy: {numpy.__version__}, Matplotlib: {matplotlib.__version__}\")'"
# Check ports
echo -e "${BLUE}🔌 Verificando puertos:${NC}"
if curl -s http://localhost:${SIDEL_APP_PORT} >/dev/null; then
echo -e "${GREEN}✅ Puerto ${SIDEL_APP_PORT} (frontend) accesible${NC}"
else
echo -e "${YELLOW}⚠️ Puerto ${SIDEL_APP_PORT} (frontend) no accesible${NC}"
fi
else
echo -e "${YELLOW}⚠️ Contenedor no está ejecutándose${NC}"
echo -e "${BLUE}💡 Ejecuta: $0 start${NC}"
fi
echo -e "${GREEN}✅ Verificación completada${NC}"
}
# Show ports in use
show_ports() {
show_banner
echo -e "${BLUE}Estado de puertos SIDEL ScriptsManager:${NC}"
echo ""
echo -e "${BLUE}Puerto Frontend:${NC} ${SIDEL_APP_PORT}"
echo -e "${BLUE}Puerto Desarrollo:${NC} ${SIDEL_DEV_PORT}"
echo -e "${BLUE}Rango Scripts:${NC} ${SIDEL_SCRIPT_PORT_RANGE}"
echo ""
if command -v netstat >/dev/null 2>&1; then
echo -e "${BLUE}Puertos actualmente en uso:${NC}"
netstat -tlnp 2>/dev/null | grep -E ":(5002|5003|520[0-9]|53[0-9][0-9]|5400)" || echo "Ningún puerto SIDEL en uso"
elif command -v ss >/dev/null 2>&1; then
echo -e "${BLUE}Puertos actualmente en uso:${NC}"
ss -tlnp | grep -E ":(5002|5003|520[0-9]|53[0-9][0-9]|5400)" || echo "Ningún puerto SIDEL en uso"
else
echo -e "${YELLOW}⚠️ netstat/ss no disponible para verificar puertos${NC}"
fi
}
# User management
manage_users() {
show_banner
echo -e "${BLUE}Gestión de usuarios SIDEL ScriptsManager${NC}"
if ! docker ps | grep -q $SIDEL_CONTAINER_NAME; then
echo -e "${RED}❌ El contenedor no está ejecutándose${NC}"
echo -e "${BLUE}💡 Ejecuta: $0 start${NC}"
return 1
fi
echo -e "${BLUE}Abriendo shell para gestión de usuarios...${NC}"
echo -e "${YELLOW}Comandos útiles:${NC}"
echo " - python scripts/create_admin.py --username <user> --password <pass>"
echo " - python scripts/list_users.py"
echo " - python scripts/manage_users.py"
echo ""
$DOCKER_COMPOSE exec scriptsmanager bash -c "source activate scriptsmanager && bash"
}
# Main command dispatcher
case "${1:-help}" in
build)
build_image
;;
start)
start_production
;;
start-dev)
start_development
;;
start-backup)
start_backup_service
;;
start-monitoring)
start_monitoring
;;
stop)
stop_containers
;;
restart)
restart_container
;;
logs)
show_logs "$@"
;;
logs-dev)
show_dev_logs "$@"
;;
shell)
open_shell
;;
shell-dev)
open_dev_shell
;;
backup)
manual_backup
;;
clean)
clean_docker
;;
reset)
reset_all
;;
status)
show_status
;;
envs)
list_conda_envs
;;
health)
health_check
;;
init-db)
init_database
;;
verify)
verify_configuration
;;
ports)
show_ports
;;
users)
manage_users
;;
help|--help|-h)
show_help
;;
*)
echo -e "${RED}Unknown command: $1${NC}"
show_help
exit 1
;;
esac


@@ -1719,14 +1719,16 @@ def main():
         if "WERKZEUG_RUN_MAIN" in os.environ:
             del os.environ["WERKZEUG_RUN_MAIN"]

+        execution_logger.log_info("About to start SocketIO server")
         socketio.run(
             app,
-            host="127.0.0.1",
+            host="0.0.0.0",
             port=args.port,
             debug=False,
             allow_unsafe_werkzeug=True,
             use_reloader=False,  # Disable reloader to avoid fd conflicts
         )
+        execution_logger.log_info("SocketIO server ended")
     except KeyboardInterrupt:
         execution_logger.log_session_event("interrupted_by_user")


@@ -1,5 +1,6 @@
 import os
 from pathlib import Path
+import urllib.parse

 # Base directory for the project
 BASE_DIR = Path(__file__).parent.parent.parent
@@ -14,6 +15,43 @@ class Config:
     )
     SQLALCHEMY_TRACK_MODIFICATIONS = False

+    # PostgreSQL-specific database configuration
+    SQLALCHEMY_ENGINE_OPTIONS = {
+        'pool_size': 10,
+        'pool_timeout': 20,
+        'pool_recycle': -1,
+        'max_overflow': 0,
+        'pool_pre_ping': True,
+    }
+
+    # Additional database settings for PostgreSQL
+    SQLALCHEMY_ECHO = os.getenv("SQLALCHEMY_ECHO", "False").lower() == "true"
+
+    @staticmethod
+    def get_database_config():
+        """Get database configuration based on DATABASE_URL."""
+        database_url = os.getenv("DATABASE_URL", f"sqlite:///{BASE_DIR}/data/scriptsmanager.db")
+
+        if database_url.startswith('postgresql://'):
+            # Parse PostgreSQL URL
+            parsed = urllib.parse.urlparse(database_url)
+            return {
+                'engine': 'postgresql',
+                'host': parsed.hostname,
+                'port': parsed.port or 5432,
+                'database': parsed.path[1:],  # Remove leading slash
+                'username': parsed.username,
+                'password': parsed.password,
+                'url': database_url
+            }
+        else:
+            # SQLite configuration (fallback)
+            return {
+                'engine': 'sqlite',
+                'url': database_url,
+                'file': database_url.replace('sqlite:///', '')
+            }
+
     # Application Settings
     SECRET_KEY = os.getenv("SECRET_KEY", "your-secret-key-change-in-production")
     DEBUG = os.getenv("DEBUG", "False").lower() == "true"
@@ -79,18 +117,59 @@ class DevelopmentConfig(Config):
     DEBUG = True

+    # Development-specific database settings
+    SQLALCHEMY_ENGINE_OPTIONS = {
+        'pool_size': 5,
+        'pool_timeout': 10,
+        'pool_recycle': 300,
+        'max_overflow': 0,
+        'pool_pre_ping': True,
+        'echo': True,  # Log SQL queries in development
+    }
+
 class ProductionConfig(Config):
     """Production configuration."""

     DEBUG = False

+    # Production-specific database settings with connection pooling
+    SQLALCHEMY_ENGINE_OPTIONS = {
+        'pool_size': 20,
+        'pool_timeout': 30,
+        'pool_recycle': 3600,  # Recycle connections every hour
+        'max_overflow': 10,
+        'pool_pre_ping': True,
+        'echo': False,  # Disable SQL logging in production
+    }
+
+    # Production security enhancements
+    SECURITY_CONFIG = {
+        **Config.SECURITY_CONFIG,
+        "enable_project_sharing": True,  # Enable in production
+        "session_cookie_secure": True,
+        "session_cookie_httponly": True,
+        "session_cookie_samesite": "Lax",
+    }
+
 class TestingConfig(Config):
     """Testing configuration."""

     TESTING = True
-    DATABASE_URL = "sqlite:///:memory:"
+    SQLALCHEMY_DATABASE_URI = "sqlite:///:memory:"
+
+    # Testing-specific settings
+    SQLALCHEMY_ENGINE_OPTIONS = {
+        'pool_size': 1,
+        'pool_timeout': 5,
+        'pool_recycle': -1,
+        'max_overflow': 0,
+        'pool_pre_ping': False,
+    }
+
+    # Disable backup in testing
+    BACKUP_ENABLED = False

 # Configuration dictionary
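The `get_database_config()` helper added in this hunk can be exercised in isolation; a minimal stdlib-only sketch (`parse_database_url` is a standalone mirror written for illustration, and the example URL is hypothetical):

```python
import urllib.parse

def parse_database_url(database_url):
    """Standalone mirror of Config.get_database_config() parsing, for illustration."""
    if database_url.startswith("postgresql://"):
        parsed = urllib.parse.urlparse(database_url)
        return {
            "engine": "postgresql",
            "host": parsed.hostname,
            "port": parsed.port or 5432,   # default PostgreSQL port when omitted
            "database": parsed.path[1:],   # strip the leading slash
            "username": parsed.username,
        }
    # SQLite fallback
    return {"engine": "sqlite", "file": database_url.replace("sqlite:///", "")}

print(parse_database_url("postgresql://scriptsmanager:pw@localhost:5432/scriptsmanager"))
```

Note that `urlparse` returns `port=None` when the URL omits it, which is why the `or 5432` default matters.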


@@ -1,10 +1,32 @@
 from flask_sqlalchemy import SQLAlchemy
 from flask_login import LoginManager
 from datetime import datetime
+import os
+import logging
+from sqlalchemy.engine import Engine
+from sqlalchemy import event, text
+import sqlite3

 db = SQLAlchemy()
 login_manager = LoginManager()

+# Configure logging for database operations
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+@event.listens_for(Engine, "connect")
+def set_sqlite_pragma(dbapi_connection, connection_record):
+    """Set SQLite-specific pragmas for better performance and foreign key support."""
+    if 'sqlite' in str(dbapi_connection):
+        cursor = dbapi_connection.cursor()
+        cursor.execute("PRAGMA foreign_keys=ON")
+        cursor.execute("PRAGMA journal_mode=WAL")
+        cursor.execute("PRAGMA synchronous=NORMAL")
+        cursor.execute("PRAGMA cache_size=10000")
+        cursor.execute("PRAGMA temp_store=MEMORY")
+        cursor.close()
+
 def init_db(app):
     """Initialize database with Flask app."""
@@ -14,6 +36,109 @@ def init_db(app):
     login_manager.login_message = "Please log in to access this page."

     with app.app_context():
-        db.create_all()
+        try:
+            # Get database configuration
+            from .config import Config
+            db_config = Config.get_database_config()
+
+            if db_config['engine'] == 'postgresql':
+                logger.info(f"Connecting to PostgreSQL database: {db_config['host']}:{db_config['port']}/{db_config['database']}")
+                # Test PostgreSQL connection
+                try:
+                    with db.engine.connect() as connection:
+                        connection.execute(text("SELECT 1"))
+                    logger.info("PostgreSQL connection successful")
+                except Exception as e:
+                    logger.error(f"PostgreSQL connection failed: {e}")
+                    raise
+            elif db_config['engine'] == 'sqlite':
+                logger.info(f"Using SQLite database: {db_config.get('file', ':memory:')}")
+
+            # Create all tables
+            db.create_all()
+            logger.info("Database tables created successfully")
+        except Exception as e:
+            logger.error(f"Database initialization failed: {e}")
+            raise

     return db

+def get_db_info():
+    """Get database connection information."""
+    try:
+        from .config import Config
+        db_config = Config.get_database_config()
+
+        if db_config['engine'] == 'postgresql':
+            # Get PostgreSQL version and connection info
+            with db.engine.connect() as connection:
+                result = connection.execute(text("SELECT version()"))
+                version = result.fetchone()[0]
+            return {
+                'engine': 'PostgreSQL',
+                'version': version.split()[1],
+                'host': db_config['host'],
+                'port': db_config['port'],
+                'database': db_config['database'],
+                'connection_pool_size': db.engine.pool.size(),
+                'checked_out_connections': db.engine.pool.checkedout(),
+            }
+        elif db_config['engine'] == 'sqlite':
+            # Get SQLite version
+            with db.engine.connect() as connection:
+                result = connection.execute(text("SELECT sqlite_version()"))
+                version = result.fetchone()[0]
+            return {
+                'engine': 'SQLite',
+                'version': version,
+                'file': db_config.get('file', ':memory:'),
+            }
+    except Exception as e:
+        logger.error(f"Failed to get database info: {e}")
+        return {'error': str(e)}
+
+def check_db_health():
+    """Check database health and connectivity."""
+    try:
+        from .config import Config
+        db_config = Config.get_database_config()
+
+        if db_config['engine'] == 'postgresql':
+            # Check PostgreSQL health
+            with db.engine.connect() as connection:
+                result = connection.execute(text("SELECT 1"))
+                result.fetchone()
+            # Check connection pool status
+            pool = db.engine.pool
+            return {
+                'status': 'healthy',
+                'engine': 'PostgreSQL',
+                'pool_size': pool.size(),
+                'checked_out': pool.checkedout(),
+                'overflow': pool.overflow(),
+                'checked_in': pool.checkedin(),
+            }
+        elif db_config['engine'] == 'sqlite':
+            # Check SQLite health
+            with db.engine.connect() as connection:
+                result = connection.execute(text("SELECT 1"))
+                result.fetchone()
+            return {
+                'status': 'healthy',
+                'engine': 'SQLite',
+            }
+    except Exception as e:
+        logger.error(f"Database health check failed: {e}")
+        return {
+            'status': 'unhealthy',
+            'error': str(e)
+        }
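One caveat on the `set_sqlite_pragma` listener added above: it fires for every `Engine` connect, including PostgreSQL, and detects SQLite by inspecting `str(dbapi_connection)`; `isinstance(dbapi_connection, sqlite3.Connection)` is a more robust check. The pragma effect itself can be verified with the stdlib alone; a minimal sketch (WAL is omitted because it is a no-op for `:memory:` databases):

```python
import sqlite3

def apply_sqlite_pragmas(conn):
    """Apply the same pragmas the SQLAlchemy listener sets, on a raw connection."""
    cursor = conn.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.execute("PRAGMA synchronous=NORMAL")
    cursor.execute("PRAGMA cache_size=10000")
    cursor.execute("PRAGMA temp_store=MEMORY")
    cursor.close()

conn = sqlite3.connect(":memory:")
apply_sqlite_pragmas(conn)
print(conn.execute("PRAGMA foreign_keys").fetchone()[0])  # 1 when enabled
```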


@@ -0,0 +1,6 @@
{
"created_at": "2025-09-13T14:39:16.163602",
"last_modified": "2025-09-13T14:39:16.163613",
"project_settings": {},
"user_preferences": {}
}


@@ -0,0 +1,19 @@
{
"pipe_length": 300,
"pipe_diameter": 0.065,
"wall_thickness": 0.003,
"roughness": 1.5e-06,
"flow_rate": 22000,
"pump_pressure": 7,
"fluid_density": 1100,
"fluid_temperature": 20,
"bulk_modulus": 2200000000,
"young_modulus": 200000000000,
"closure_time": 2,
"damper_volume": 50,
"damper_precharge": 4,
"damper_gas_percentage": 60,
"damper_position": 280,
"simulation_time": 10,
"damper_enabled": false
}
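A quick plausibility check on these values: the bulk modulus, density, pipe diameter, wall thickness and Young's modulus above determine the pressure-wave speed used in TSNet-style water-hammer analysis. A rough sketch using the classical Korteweg formula, assuming the SI units listed in the file (this helper is illustrative, not part of the project):

```python
import math

def korteweg_wave_speed(bulk_modulus, density, diameter, wall_thickness, young_modulus):
    """Pressure-wave speed in a thin-walled elastic pipe (Korteweg formula)."""
    # Pipe-elasticity correction: how much the pipe wall softens the fluid column
    pipe_elasticity = (bulk_modulus * diameter) / (young_modulus * wall_thickness)
    return math.sqrt((bulk_modulus / density) / (1 + pipe_elasticity))

# Values from the parameter file above
a = korteweg_wave_speed(2.2e9, 1100, 0.065, 0.003, 2.0e11)
print(f"wave speed ~ {a:.0f} m/s")  # on the order of 1300 m/s, a sane value for a steel pipe
```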

File diff suppressed because it is too large


@@ -0,0 +1,6 @@
{
"created_at": "2025-09-13T14:35:54.726752",
"last_modified": "2025-09-13T14:35:54.726758",
"project_settings": {},
"user_preferences": {}
}


@@ -0,0 +1,6 @@
{
"created_at": "2025-09-13T14:13:18.131807",
"last_modified": "2025-09-13T14:13:18.131814",
"project_settings": {},
"user_preferences": {}
}


@@ -0,0 +1,19 @@
{
"pipe_length": 300,
"pipe_diameter": 0.065,
"wall_thickness": 0.003,
"roughness": 1.5e-06,
"flow_rate": 22000,
"pump_pressure": 7,
"fluid_density": 1100,
"fluid_temperature": 20,
"bulk_modulus": 2200000000,
"young_modulus": 200000000000,
"closure_time": 2,
"damper_volume": 50,
"damper_precharge": 4,
"damper_gas_percentage": 60,
"damper_position": 280,
"simulation_time": 10,
"damper_enabled": false
}

File diff suppressed because it is too large

Binary file not shown.


@@ -1,39 +1,66 @@
 version: '3.8'

 services:
-  # Servicio principal de SIDEL ScriptsManager
+  # PostgreSQL Database Service
+  postgres:
+    image: postgres:15-alpine
+    container_name: scriptsmanager_postgres
+    environment:
+      POSTGRES_DB: scriptsmanager
+      POSTGRES_USER: scriptsmanager
+      POSTGRES_PASSWORD: scriptsmanager_dev_password
+      POSTGRES_INITDB_ARGS: "--encoding=UTF8 --locale=C"
+    volumes:
+      - postgres_data:/var/lib/postgresql/data
+      - ./sql:/docker-entrypoint-initdb.d:ro
+    ports:
+      - "5432:5432"
+    healthcheck:
+      test: ["CMD-SHELL", "pg_isready -U scriptsmanager -d scriptsmanager"]
+      interval: 10s
+      timeout: 5s
+      retries: 5
+      start_period: 30s
+    restart: unless-stopped
+
+  # Production Application Service
   scriptsmanager:
     build: .
-    container_name: sidel_scriptsmanager
-    network_mode: host  # Usar red host para acceso directo a todos los puertos dinámicos
-    volumes:
-      # Volúmenes para persistencia de datos multiusuario según especificaciones
-      - ./data:/app/data
-      - ./backup:/app/backup
-      - ./logs:/app/logs
-      # Scripts de backend (SOLO app/backend/script_groups/)
-      - ./app/backend/script_groups:/app/app/backend/script_groups
+    container_name: scriptsmanager_app
+    network_mode: host
     environment:
-      # Variables de entorno según especificaciones SIDEL ScriptsManager
+      # Database Configuration
+      - DATABASE_URL=postgresql://scriptsmanager:scriptsmanager_dev_password@localhost:5432/scriptsmanager
+      # Application Configuration
       - DEBUG=false
       - SECRET_KEY=sidel-scriptsmanager-production-key-change-this
-      - DATABASE_URL=sqlite:////tmp/scriptsmanager.db
       - BASE_DATA_PATH=/app/data
       - BACKUP_ENABLED=true
+      # Port and Resource Configuration
       - PORT_RANGE_START=5200
       - PORT_RANGE_END=5400
       - MAX_PROJECTS_PER_USER=50
-      # Variables multiusuario y multi-proyecto
+      # Internationalization
       - DEFAULT_LANGUAGE=en
       - SUPPORTED_LANGUAGES=en,es,it,fr
       - DEFAULT_THEME=light
-      # Variables para conda - Entornos según especificaciones
+      # Conda Environment Configuration
       - CONDA_DEFAULT_ENV=scriptsmanager
       - TSNET_ENV=tsnet
       - PYTHONPATH=/app
-      # Variables específicas de SIDEL
+      # SIDEL Branding
       - SIDEL_LOGO_PATH=/app/app/static/images/SIDEL.png
       - CORPORATE_BRANDING=true
+    volumes:
+      # Data persistence volumes
+      - ./data:/app/data
+      - ./backup:/app/backup
+      - ./logs:/app/logs
+      # Backend scripts volume (only for production)
+      - ./app/backend/script_groups:/app/app/backend/script_groups
+    depends_on:
+      postgres:
+        condition: service_healthy
     restart: unless-stopped
     healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost:5002/health"]
@@ -41,128 +68,123 @@ services:
       timeout: 10s
       retries: 3
       start_period: 60s
+    profiles:
+      - production

-  # Servicio para desarrollo con hot-reload
+  # Development Application Service with Hot Reload
   scriptsmanager-dev:
     build: .
-    container_name: sidel_scriptsmanager_dev
-    network_mode: host  # Usar red host para desarrollo también
+    container_name: scriptsmanager_dev
+    network_mode: host
+    environment:
+      # Database Configuration (same as production for parity)
+      - DATABASE_URL=postgresql://scriptsmanager:scriptsmanager_dev_password@localhost:5432/scriptsmanager
+      # Development Configuration
+      - DEBUG=true
+      - SECRET_KEY=sidel-dev-secret-key
+      - FLASK_ENV=development
+      - BASE_DATA_PATH=/app/data
+      - BACKUP_ENABLED=false
+      # Port and Resource Configuration (higher limits for development)
+      - PORT_RANGE_START=5200
+      - PORT_RANGE_END=5400
+      - MAX_PROJECTS_PER_USER=100
+      # Internationalization
+      - DEFAULT_LANGUAGE=en
+      - SUPPORTED_LANGUAGES=en,es,it,fr
+      - DEFAULT_THEME=light
+      # Conda Environment Configuration
+      - CONDA_DEFAULT_ENV=scriptsmanager
+      - TSNET_ENV=tsnet
+      - PYTHONPATH=/app
+      # SIDEL Branding
+      - SIDEL_LOGO_PATH=/app/app/static/images/SIDEL.png
+      - CORPORATE_BRANDING=true
     volumes:
-      # Montar código completo para desarrollo
+      # Hot reload: mount entire codebase
       - .:/app
       - ./data:/app/data
       - ./backup:/app/backup
       - ./logs:/app/logs
-    environment:
-      - DEBUG=true
-      - SECRET_KEY=sidel-dev-secret-key
-      - DATABASE_URL=sqlite:///app/data/scriptsmanager_dev.db
-      - FLASK_ENV=development
-      - BASE_DATA_PATH=/app/data
-      - BACKUP_ENABLED=false
-      - PORT_RANGE_START=5200
-      - PORT_RANGE_END=5400
-      - MAX_PROJECTS_PER_USER=100
-      - DEFAULT_LANGUAGE=en
-      - SUPPORTED_LANGUAGES=en,es,it,fr
-      - DEFAULT_THEME=light
-      - CONDA_DEFAULT_ENV=scriptsmanager
-      - TSNET_ENV=tsnet
-      - PYTHONPATH=/app
-      - SIDEL_LOGO_PATH=/app/app/static/images/SIDEL.png
-      - CORPORATE_BRANDING=true
+    depends_on:
+      postgres:
+        condition: service_healthy
     command: >
       bash -c "source activate scriptsmanager &&
+               echo '=== SIDEL ScriptsManager Development Environment ===' &&
+               echo 'Hot reload enabled - code changes will be reflected automatically' &&
+               echo 'Application will be available at: http://localhost:5003' &&
+               echo 'Debug port available at: 5678' &&
                python scripts/init_db.py &&
                python scripts/run_app.py"
     profiles:
       - dev

-  # Servicio para backup automático según especificaciones SIDEL
+  # Backup Service
   backup:
     build: .
-    container_name: sidel_backup_service
+    container_name: scriptsmanager_backup
+    network_mode: host
+    environment:
+      - DATABASE_URL=postgresql://scriptsmanager:scriptsmanager_dev_password@localhost:5432/scriptsmanager
+      - BACKUP_ENABLED=true
+      - BACKUP_RETENTION_DAYS=30
+      - PYTHONPATH=/app
     volumes:
       - ./data:/app/data
       - ./backup:/app/backup
       - ./logs:/app/logs
-    environment:
-      - BACKUP_ENABLED=true
-      - BACKUP_RETENTION_DAYS=30
-      - DATABASE_URL=sqlite:///app/data/scriptsmanager.db
-      - PYTHONPATH=/app
+    depends_on:
+      postgres:
+        condition: service_healthy
     command: >
       bash -c "source activate scriptsmanager &&
+               echo '=== Starting SIDEL ScriptsManager Backup Service ===' &&
                while true; do
-                 echo '=== Starting daily backup ==='
+                 echo '[BACKUP] Starting daily backup process...'
                  python -c 'from app.services.backup_service import BackupService; BackupService().create_backup()'
-                 echo '=== Backup completed ==='
+                 echo '[BACKUP] Backup completed successfully'
-                 sleep 86400  # Backup diario (24 horas)
+                 sleep 86400  # Daily backup (24 hours)
                done"
     profiles:
       - backup

-  # Servicio de monitoreo de logs (opcional)
+  # Log Monitor Service
   log-monitor:
     build: .
-    container_name: sidel_log_monitor
+    container_name: scriptsmanager_monitor
+    network_mode: host
+    environment:
+      - DATABASE_URL=postgresql://scriptsmanager:scriptsmanager_dev_password@localhost:5432/scriptsmanager
+      - PYTHONPATH=/app
     volumes:
       - ./logs:/app/logs
       - ./data:/app/data
-    environment:
-      - DATABASE_URL=sqlite:///app/data/scriptsmanager.db
-      - PYTHONPATH=/app
+    depends_on:
+      postgres:
+        condition: service_healthy
     command: >
       bash -c "source activate scriptsmanager &&
+               echo '=== Starting SIDEL ScriptsManager Log Monitor ===' &&
                python -c '
                import time
-               from app.services.log_service import LogService
+               from app.services.data_manager import DataManager
-               print(\"Starting log monitor service...\")
+               print(\"Log monitor service started - cleanup every hour\")
                while True:
                    try:
-                       # Cleanup de logs antiguos según políticas de retención
-                       LogService().cleanup_old_logs()
-                       time.sleep(3600)  # Cleanup cada hora
+                       # Log cleanup according to retention policies
+                       # TODO: Implement log cleanup service
+                       print(f\"[MONITOR] Log cleanup check at {time.strftime(\"%Y-%m-%d %H:%M:%S\")}\")
+                       time.sleep(3600)  # Cleanup every hour
                    except Exception as e:
-                       print(f\"Log monitor error: {e}\")
+                       print(f\"[MONITOR] Error: {e}\")
                        time.sleep(60)
                '"
     profiles:
       - monitoring

-  # Base de datos separada (opcional, si quieres usar PostgreSQL en lugar de SQLite)
-  postgres:
-    image: postgres:15
-    container_name: sidel_postgres
-    environment:
-      - POSTGRES_DB=scriptsmanager
-      - POSTGRES_USER=scriptsmanager
-      - POSTGRES_PASSWORD=scriptsmanager_password
-    volumes:
-      - postgres_data:/var/lib/postgresql/data
-    ports:
-      - "5432:5432"
-    profiles:
-      - postgres
-
-  # Servicio para backup automático
-  backup:
-    build: .
-    container_name: sidel_backup
-    volumes:
-      - ./data:/app/data
-      - ./backup:/app/backup
-    environment:
-      - BACKUP_ENABLED=true
-      - PYTHONPATH=/app
-    command: >
-      bash -c "source activate scriptsmanager &&
-               while true; do
-                 python -c 'from app.services.backup_service import BackupService; BackupService().create_backup()'
-                 sleep 86400  # Backup diario
-               done"
-    profiles:
-      - backup
+# Named volumes for data persistence
 volumes:
   postgres_data:
+    driver: local
+    name: scriptsmanager_postgres_data

docker-entrypoint-debug.sh Executable file

@@ -0,0 +1,89 @@
#!/bin/bash
set -e
echo "=== SIDEL ScriptsManager Docker Entrypoint ==="
echo "Working directory: $(pwd)"
echo "User: $(whoami)"
echo "Database URL: ${DATABASE_URL:-'Not set'}"
# Activate the conda environment
source /opt/conda/etc/profile.d/conda.sh
conda activate scriptsmanager
cd /app
echo "Available conda environments:"
conda env list
echo "Python path:"
which python
echo "Testing python modules:"
python -c "import flask; print(f'Flask: {flask.__version__}')"
python -c "import sqlalchemy; print(f'SQLAlchemy: {sqlalchemy.__version__}')"
# Check psycopg2 specifically
echo "Testing psycopg2 module:"
python -c "
try:
import psycopg2
print(f'psycopg2 version: {psycopg2.__version__}')
print('psycopg2 imported successfully')
except ImportError as e:
print(f'psycopg2 import error: {e}')
print('Available packages in scriptsmanager environment:')
import subprocess
result = subprocess.run(['/opt/conda/bin/conda', 'list', '-n', 'scriptsmanager'],
capture_output=True, text=True)
print(result.stdout)
exit(1)
"
# Make sure the required directories exist
echo "Checking data directories..."
ls -la data/ instance/ logs/ || echo "Creating missing directories..."
# Database setup based on DATABASE_URL
if [[ "${DATABASE_URL}" == postgresql* ]]; then
echo "=== PostgreSQL Database Setup ==="
# Extract database connection info
DB_HOST=$(echo $DATABASE_URL | sed -n 's/.*@\([^:]*\).*/\1/p')
DB_PORT=$(echo $DATABASE_URL | sed -n 's/.*:\([0-9]*\)\/.*/\1/p')
DB_NAME=$(echo $DATABASE_URL | sed -n 's/.*\/\([^?]*\).*/\1/p')
echo "Database Host: ${DB_HOST}"
echo "Database Port: ${DB_PORT}"
echo "Database Name: ${DB_NAME}"
# Wait for PostgreSQL to be ready
echo "Waiting for PostgreSQL to be ready..."
# Simple wait to ensure PostgreSQL has time to start
sleep 15
echo "PostgreSQL should be ready - proceeding..."
else
echo "=== SQLite Database Setup ==="
# Make sure the SQLite database can be created
echo "Setting up SQLite database permissions..."
touch data/scriptsmanager.db || echo "Database file already exists or created"
chmod 664 data/scriptsmanager.db || true
fi
# Initialize the database
echo "Initializing database schema..."
python scripts/init_db.py
# Check conda environments
echo "ScriptsManager environment packages:"
conda list -n scriptsmanager | grep -E "(flask|sqlalchemy|psycopg2)" || true
echo "TSNet environment packages:"
conda list -n tsnet | grep -E "(tsnet|numpy|matplotlib)" || true
# Database health check
echo "Performing database health check..."
echo "Skipping health check for now - starting application"
echo "=== SIDEL ScriptsManager Ready ==="
exec "$@"
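The fixed `sleep 15` above leaves a race on slow hosts: PostgreSQL may not be accepting connections yet when `init_db.py` runs. An active readiness probe is more robust. A minimal sketch of such a helper (for example a hypothetical `scripts/wait_for_db.py` called from the entrypoint; the function and its name are illustrative, and a plain TCP check stands in for a full `pg_isready`):

```python
import socket
import time

def wait_for_tcp(host, port, max_tries=30, delay=1.0):
    """Block until a TCP connect to host:port succeeds, or give up after max_tries."""
    for _ in range(max_tries):
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(delay)
    return False

# Intended entrypoint usage (hypothetical):
# if not wait_for_tcp(os.environ.get("DB_HOST", "localhost"), 5432):
#     sys.exit("PostgreSQL did not become ready in time")
```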

docker-entrypoint-simple.sh Executable file

@@ -0,0 +1,72 @@
#!/bin/bash
set -e
echo "=== SIDEL ScriptsManager Docker Entrypoint ==="
echo "Working directory: $(pwd)"
echo "User: $(whoami)"
echo "Database URL: ${DATABASE_URL:-'Not set'}"
# Activate the conda environment
source /opt/conda/etc/profile.d/conda.sh
conda activate scriptsmanager
cd /app
echo "Available conda environments:"
conda env list
echo "Python path:"
which python
echo "Testing python modules:"
python -c "import flask; print(f'Flask: {flask.__version__}')"
python -c "import sqlalchemy; print(f'SQLAlchemy: {sqlalchemy.__version__}')"
# Make sure the required directories exist
echo "Checking data directories..."
ls -la data/ instance/ logs/ || echo "Creating missing directories..."
# Database setup based on DATABASE_URL
if [[ "${DATABASE_URL}" == postgresql* ]]; then
echo "=== PostgreSQL Database Setup ==="
# Extract database connection info
DB_HOST=$(echo $DATABASE_URL | sed -n 's/.*@\([^:]*\).*/\1/p')
DB_PORT=$(echo $DATABASE_URL | sed -n 's/.*:\([0-9]*\)\/.*/\1/p')
DB_NAME=$(echo $DATABASE_URL | sed -n 's/.*\/\([^?]*\).*/\1/p')
echo "Database Host: ${DB_HOST}"
echo "Database Port: ${DB_PORT}"
echo "Database Name: ${DB_NAME}"
# Wait for PostgreSQL to be ready
echo "Waiting for PostgreSQL to be ready..."
# Simple wait to ensure PostgreSQL has time to start
sleep 15
echo "PostgreSQL should be ready - proceeding..."
else
echo "=== SQLite Database Setup ==="
# Make sure the SQLite database can be created
echo "Setting up SQLite database permissions..."
touch data/scriptsmanager.db || echo "Database file already exists or created"
chmod 664 data/scriptsmanager.db || true
fi
# Initialize the database
echo "Initializing database schema..."
python scripts/init_db.py
# Check conda environments
echo "ScriptsManager environment packages:"
conda list -n scriptsmanager | grep -E "(flask|sqlalchemy|psycopg2)" || true
echo "TSNet environment packages:"
conda list -n tsnet | grep -E "(tsnet|numpy|matplotlib)" || true
# Database health check
echo "Performing database health check..."
echo "Skipping health check for now - starting application"
echo "=== SIDEL ScriptsManager Ready ==="
exec "$@"

docker-entrypoint.sh Executable file

@@ -0,0 +1,71 @@
#!/bin/bash
set -e
echo "=== SIDEL ScriptsManager Docker Entrypoint ==="
echo "Working directory: $(pwd)"
echo "User: $(whoami)"
echo "Database URL: ${DATABASE_URL:-'Not set'}"
# Activate the conda environment
source /opt/conda/etc/profile.d/conda.sh
conda activate scriptsmanager
cd /app
echo "Available conda environments:"
conda env list
echo "Checking directory structure..."
if [ ! -d "app/backend/script_groups" ]; then
echo "ERROR: app/backend/script_groups directory not found!"
exit 1
fi
# Make sure the required directories exist
echo "Checking data directories..."
ls -la data/ instance/ logs/ || echo "Creating missing directories..."
# Database setup based on DATABASE_URL
if [[ "${DATABASE_URL}" == postgresql* ]]; then
echo "=== PostgreSQL Database Setup ==="
# Extract database connection info
DB_HOST=$(echo $DATABASE_URL | sed -n 's/.*@\([^:]*\).*/\1/p')
DB_PORT=$(echo $DATABASE_URL | sed -n 's/.*:\([0-9]*\)\/.*/\1/p')
DB_NAME=$(echo $DATABASE_URL | sed -n 's/.*\/\([^?]*\).*/\1/p')
echo "Database Host: ${DB_HOST}"
echo "Database Port: ${DB_PORT}"
echo "Database Name: ${DB_NAME}"
# Wait for PostgreSQL to be ready
echo "Waiting for PostgreSQL to be ready..."
# Simple wait to ensure PostgreSQL has time to start
sleep 10
echo "PostgreSQL should be ready - proceeding..."
else
echo "=== SQLite Database Setup ==="
# Make sure the SQLite database can be created
echo "Setting up SQLite database permissions..."
touch data/scriptsmanager.db || echo "Database file already exists or created"
chmod 664 data/scriptsmanager.db || true
fi
# Initialize the database
echo "Initializing database schema..."
python scripts/init_db.py
# Check conda environments
echo "ScriptsManager environment packages:"
conda list -n scriptsmanager | grep -E "(flask|sqlalchemy|psycopg2)" || true
echo "TSNet environment packages:"
conda list -n tsnet | grep -E "(tsnet|numpy|matplotlib)" || true
# Database health check
echo "Performing database health check..."
echo "Skipping health check for now - starting application"
echo "=== SIDEL ScriptsManager Ready ==="
exec "$@"


@@ -72,7 +72,7 @@ rebuild_quick() {
     echo -e "${BLUE}⚡ Rebuild rápido de SIDEL ScriptsManager...${NC}"

     echo -e "${YELLOW}Paso 1/3: Deteniendo contenedor...${NC}"
-    stop_services
+    stop_containers

     echo -e "${YELLOW}Paso 2/3: Reconstruyendo imagen...${NC}"
     build_image
@@ -113,9 +113,15 @@ show_help() {
     echo -e "  ${GREEN}verify${NC}         Verificar configuración y entornos"
     echo -e "  ${GREEN}ports${NC}          Mostrar puertos en uso"
     echo -e "  ${GREEN}users${NC}          Gestionar usuarios (requiere shell activo)"
+    echo -e "  ${GREEN}db-status${NC}      Verificar estado de la base de datos"
+    echo -e "  ${GREEN}db-migrate${NC}     Migrar desde SQLite a PostgreSQL"
+    echo -e "  ${GREEN}db-backup${NC}      Crear backup de la base de datos"
+    echo -e "  ${GREEN}start-postgres${NC} Iniciar solo el servicio PostgreSQL"
+    echo -e "  ${GREEN}stop-postgres${NC}  Detener solo el servicio PostgreSQL"
     echo ""
     echo "Servicios opcionales (perfiles):"
     echo -e "  ${YELLOW}--profile dev${NC}        Modo desarrollo"
+    echo -e "  ${YELLOW}--profile production${NC} Modo producción"
     echo -e "  ${YELLOW}--profile backup${NC}     Backup automático"
     echo -e "  ${YELLOW}--profile monitoring${NC} Monitoreo de logs"
     echo ""
@@ -210,10 +216,22 @@ start_production() {
         echo -e "${YELLOW}¡IMPORTANTE: Edita el archivo .env con tus configuraciones de producción!${NC}"
     fi

-    $DOCKER_COMPOSE up -d scriptsmanager
+    # Iniciar PostgreSQL primero
+    echo -e "${BLUE}🐘 Iniciando PostgreSQL...${NC}"
+    $DOCKER_COMPOSE up -d postgres
+
+    # Esperar que PostgreSQL esté listo
+    echo -e "${BLUE}⏳ Esperando que PostgreSQL esté listo...${NC}"
+    sleep 10
+
+    # Iniciar aplicación
+    echo -e "${BLUE}🚀 Iniciando aplicación...${NC}"
+    $DOCKER_COMPOSE --profile production up -d scriptsmanager

     echo -e "${GREEN}✅ SIDEL ScriptsManager iniciado en http://localhost:${SIDEL_APP_PORT}${NC}"
     echo -e "${BLUE}📊 Dashboard multiusuario disponible${NC}"
     echo -e "${BLUE}🔧 Scripts TSNet en puertos ${SIDEL_SCRIPT_PORT_RANGE}${NC}"
+    echo -e "${BLUE}🐘 PostgreSQL en puerto 5432${NC}"
 }

 # Función para iniciar en desarrollo
@@ -230,9 +248,21 @@ start_development() {
         cp .env.example .env
     fi

+    # Iniciar PostgreSQL primero
+    echo -e "${BLUE}🐘 Iniciando PostgreSQL...${NC}"
+    $DOCKER_COMPOSE up -d postgres
+
+    # Esperar que PostgreSQL esté listo
+    echo -e "${BLUE}⏳ Esperando que PostgreSQL esté listo...${NC}"
+    sleep 10
+
+    # Iniciar aplicación en modo desarrollo
+    echo -e "${BLUE}🚀 Iniciando aplicación en modo desarrollo...${NC}"
     $DOCKER_COMPOSE --profile dev up -d scriptsmanager-dev

     echo -e "${GREEN}✅ SIDEL ScriptsManager (desarrollo) iniciado en http://localhost:${SIDEL_DEV_PORT}${NC}"
     echo -e "${BLUE}🔄 Hot-reload activado para desarrollo${NC}"
+    echo -e "${BLUE}🐘 PostgreSQL en puerto 5432${NC}"
 }

 # Función para iniciar backup automático
@@ -474,6 +504,158 @@ manage_users() {
     $DOCKER_COMPOSE exec scriptsmanager bash -c "source activate scriptsmanager && bash"
 }
# Check the database status
check_database_status() {
show_banner
echo -e "${BLUE}Checking database status...${NC}"
check_docker_compose
# Check whether PostgreSQL is running
if docker ps | grep -q "scriptsmanager_postgres"; then
echo -e "${GREEN}✅ PostgreSQL container is running${NC}"
# Check connectivity
if $DOCKER_COMPOSE exec postgres pg_isready -U scriptsmanager -d scriptsmanager > /dev/null 2>&1; then
echo -e "${GREEN}✅ PostgreSQL is accepting connections${NC}"
# Retrieve database information
echo -e "${BLUE}📊 Database Information:${NC}"
$DOCKER_COMPOSE exec postgres psql -U scriptsmanager -d scriptsmanager -c "
SELECT
'PostgreSQL Version' as info,
version() as value
UNION ALL
SELECT
'Database Size' as info,
pg_size_pretty(pg_database_size('scriptsmanager')) as value
UNION ALL
SELECT
'Active Connections' as info,
count(*)::text as value
FROM pg_stat_activity
WHERE datname = 'scriptsmanager';" 2>/dev/null || echo "Could not retrieve database info"
else
echo -e "${RED}❌ PostgreSQL is not accepting connections${NC}"
fi
else
echo -e "${YELLOW}⚠️ PostgreSQL container is not running${NC}"
echo -e "${BLUE}💡 Start with: $0 start-postgres${NC}"
fi
# Check whether the application can connect
if docker ps | grep -q $SIDEL_CONTAINER_NAME; then
echo -e "${BLUE}🔗 Testing application database connection...${NC}"
$DOCKER_COMPOSE exec scriptsmanager bash -c "
source activate scriptsmanager &&
python -c 'from app.config.database import check_db_health; import json; print(json.dumps(check_db_health(), indent=2))'
" 2>/dev/null || echo -e "${YELLOW}⚠️ Application container not available${NC}"
fi
}
# Function to migrate from SQLite to PostgreSQL
migrate_to_postgresql() {
show_banner
echo -e "${BLUE}Migrando desde SQLite a PostgreSQL...${NC}"
check_docker_compose
# Make sure PostgreSQL is running
if ! docker ps | grep -q "scriptsmanager_postgres"; then
echo -e "${YELLOW}Iniciando PostgreSQL...${NC}"
$DOCKER_COMPOSE up -d postgres
sleep 10
fi
# Check whether the SQLite database exists
if [ ! -f "data/scriptsmanager.db" ]; then
echo -e "${RED}❌ No se encontró la base de datos SQLite en data/scriptsmanager.db${NC}"
echo -e "${BLUE}💡 ¿Quizás ya se migró o no tienes datos existentes?${NC}"
return 1
fi
echo -e "${BLUE}🔄 Ejecutando migración...${NC}"
echo -e "${YELLOW}Esto puede tomar varios minutos dependiendo del tamaño de los datos${NC}"
# Run the migration script (a backup is created by default; pass --no-backup to skip it)
python migrate_sqlite_to_postgresql.py \
--source "data/scriptsmanager.db" \
--target "${DATABASE_URL:-postgresql://scriptsmanager:scriptsmanager_dev_password@localhost:5432/scriptsmanager}"
if [ $? -eq 0 ]; then
echo -e "${GREEN}✅ Migración completada exitosamente${NC}"
echo -e "${BLUE}💡 La base de datos SQLite original se mantiene como backup${NC}"
echo -e "${BLUE}💡 Ahora puedes usar PostgreSQL configurando DATABASE_URL en docker-compose.yml${NC}"
else
echo -e "${RED}❌ La migración falló${NC}"
return 1
fi
}
# Function to back up the database
backup_database() {
show_banner
echo -e "${BLUE}Creando backup de la base de datos...${NC}"
check_docker_compose
if docker ps | grep -q "scriptsmanager_postgres"; then
# Backup PostgreSQL
echo -e "${BLUE}📦 Creando backup de PostgreSQL...${NC}"
timestamp=$(date +"%Y%m%d_%H%M%S")
backup_file="backup/postgres_backup_${timestamp}.sql"
mkdir -p backup
$DOCKER_COMPOSE exec postgres pg_dump -U scriptsmanager -d scriptsmanager > "$backup_file"
if [ $? -eq 0 ]; then
echo -e "${GREEN}✅ Backup PostgreSQL creado: ${backup_file}${NC}"
else
echo -e "${RED}❌ Error creando backup PostgreSQL${NC}"
fi
elif [ -f "data/scriptsmanager.db" ]; then
# Backup SQLite
echo -e "${BLUE}📦 Creando backup de SQLite...${NC}"
timestamp=$(date +"%Y%m%d_%H%M%S")
backup_file="backup/sqlite_backup_${timestamp}.db"
mkdir -p backup
cp "data/scriptsmanager.db" "$backup_file"
echo -e "${GREEN}✅ Backup SQLite creado: ${backup_file}${NC}"
else
echo -e "${RED}❌ No se encontró ninguna base de datos para hacer backup${NC}"
fi
}
# Function to start only PostgreSQL
start_postgres() {
show_banner
echo -e "${BLUE}Iniciando servicio PostgreSQL...${NC}"
check_docker_compose
$DOCKER_COMPOSE up -d postgres
echo -e "${BLUE}⏳ Esperando que PostgreSQL esté listo...${NC}"
sleep 5
if $DOCKER_COMPOSE exec postgres pg_isready -U scriptsmanager -d scriptsmanager > /dev/null 2>&1; then
echo -e "${GREEN}✅ PostgreSQL iniciado y listo en puerto 5432${NC}"
echo -e "${BLUE}🔗 Connection: postgresql://scriptsmanager:scriptsmanager_dev_password@localhost:5432/scriptsmanager${NC}"
else
echo -e "${YELLOW}⚠️ PostgreSQL iniciado pero aún no está listo${NC}"
echo -e "${BLUE}💡 Usa: $0 db-status para verificar${NC}"
fi
}
# Function to stop only PostgreSQL
stop_postgres() {
echo -e "${BLUE}Deteniendo servicio PostgreSQL...${NC}"
check_docker_compose
$DOCKER_COMPOSE stop postgres
echo -e "${GREEN}✅ PostgreSQL detenido${NC}"
}
# Main script
case "${1:-help}" in
build)
@@ -545,6 +727,21 @@ case "${1:-help}" in
users)
manage_users
;;
db-status)
check_database_status
;;
db-migrate)
migrate_to_postgresql
;;
db-backup)
backup_database
;;
start-postgres)
start_postgres
;;
stop-postgres)
stop_postgres
;;
help|--help|-h)
show_help
;;
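The `db-migrate` and `start-postgres` commands above hand a `postgresql://` URL (built from `DATABASE_URL` or the development default) to the Python tooling. A quick sanity check for such a URL can be sketched with the standard library; `is_postgres_url` is a hypothetical helper, not part of this commit:

```python
# Hypothetical helper (not in this commit): sanity-check a PostgreSQL URL like
# the --target value passed by migrate_to_postgresql().
from urllib.parse import urlsplit

def is_postgres_url(url: str) -> bool:
    # Accept both plain and driver-qualified schemes, e.g. postgresql+psycopg2://
    scheme = urlsplit(url).scheme
    return scheme == "postgresql" or scheme.startswith("postgresql+")

print(is_postgres_url("postgresql://scriptsmanager:pw@localhost:5432/scriptsmanager"))  # True
```

Accepting driver-qualified schemes matters because SQLAlchemy treats `postgresql+psycopg2://...` as a valid PostgreSQL URL as well.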

migrate_sqlite_to_postgresql.py Normal file
@@ -0,0 +1,377 @@
#!/usr/bin/env python3
"""
SIDEL ScriptsManager - SQLite to PostgreSQL Migration Script
This script migrates data from SQLite to PostgreSQL while maintaining
referential integrity and data consistency.
Usage:
python migrate_sqlite_to_postgresql.py [--source SOURCE_DB] [--target TARGET_URL] [--dry-run] [--no-backup]
Arguments:
--source: SQLite database file path (default: data/scriptsmanager.db)
--target: PostgreSQL connection URL (default: from DATABASE_URL env var)
--dry-run: Perform a dry run without making changes
--no-backup: Skip the backup that is created by default before migration
"""
import argparse
import os
import sys
import json
import shutil
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Any, Optional
# Add the project root (this script's directory) to the Python path so app.* imports resolve
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
import sqlite3
from sqlalchemy import create_engine, MetaData, Table, select, insert
from sqlalchemy.orm import sessionmaker
from sqlalchemy.exc import SQLAlchemyError
# Import application modules
from app.config.config import Config
from app.config.database import db
class DatabaseMigrator:
"""Handles migration from SQLite to PostgreSQL."""
def __init__(self, sqlite_path: str, postgresql_url: str, dry_run: bool = False):
self.sqlite_path = sqlite_path
self.postgresql_url = postgresql_url
self.dry_run = dry_run
# Database connections
self.sqlite_engine = None
self.postgres_engine = None
self.sqlite_metadata = None
self.postgres_metadata = None
# Migration statistics
self.stats = {
'tables_migrated': 0,
'total_records': 0,
'start_time': None,
'end_time': None,
'errors': []
}
def connect_databases(self):
"""Establish connections to both databases."""
try:
# Connect to SQLite
print(f"Connecting to SQLite database: {self.sqlite_path}")
self.sqlite_engine = create_engine(f"sqlite:///{self.sqlite_path}")
self.sqlite_metadata = MetaData()
self.sqlite_metadata.reflect(bind=self.sqlite_engine)
# Connect to PostgreSQL
print(f"Connecting to PostgreSQL database...")
self.postgres_engine = create_engine(self.postgresql_url)
self.postgres_metadata = MetaData()
# Test connections (exec_driver_sql accepts raw SQL strings;
# SQLAlchemy 2.x Connection.execute() would require text())
with self.sqlite_engine.connect() as conn:
result = conn.exec_driver_sql("SELECT name FROM sqlite_master WHERE type='table'")
sqlite_tables = [row[0] for row in result.fetchall()]
print(f"Found {len(sqlite_tables)} tables in SQLite: {sqlite_tables}")
with self.postgres_engine.connect() as conn:
result = conn.exec_driver_sql("SELECT version()")
pg_version = result.fetchone()[0]
print(f"PostgreSQL version: {pg_version.split()[1]}")
return True
except Exception as e:
print(f"Error connecting to databases: {e}")
self.stats['errors'].append(f"Connection error: {e}")
return False
def create_backup(self):
"""Create backup of SQLite database before migration."""
try:
backup_dir = Path("backup") / datetime.now().strftime("%Y-%m-%d")
backup_dir.mkdir(parents=True, exist_ok=True)
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
backup_path = backup_dir / f"sqlite_backup_{timestamp}.db"
print(f"Creating backup: {backup_path}")
shutil.copy2(self.sqlite_path, backup_path)
return str(backup_path)
except Exception as e:
print(f"Error creating backup: {e}")
return None
def get_table_dependencies(self) -> List[str]:
"""Get tables in dependency order for migration."""
# Define table migration order based on foreign key dependencies
# This should be updated based on your actual schema
dependency_order = [
'users', # Independent table
'scripts', # Depends on users
'execution_logs', # Depends on scripts
'script_tags', # Depends on scripts
'user_preferences', # Depends on users
'backup_logs', # Independent
'system_settings', # Independent
]
# Get actual tables from SQLite
available_tables = list(self.sqlite_metadata.tables.keys())
# Return only tables that exist, in dependency order
ordered_tables = []
for table in dependency_order:
if table in available_tables:
ordered_tables.append(table)
# Add any remaining tables not in dependency list
for table in available_tables:
if table not in ordered_tables and not table.startswith('sqlite_'):
ordered_tables.append(table)
return ordered_tables
def migrate_table_data(self, table_name: str) -> Dict[str, Any]:
"""Migrate data from a specific table."""
print(f"\nMigrating table: {table_name}")
try:
# Get table schema from SQLite
sqlite_table = self.sqlite_metadata.tables[table_name]
# Reflect PostgreSQL schema (should already be created by SQLAlchemy)
self.postgres_metadata.reflect(bind=self.postgres_engine)
if table_name not in self.postgres_metadata.tables:
print(f"Warning: Table {table_name} does not exist in PostgreSQL, skipping...")
return {'status': 'skipped', 'reason': 'table_not_found', 'records': 0}
postgres_table = self.postgres_metadata.tables[table_name]
# Read data from SQLite
with self.sqlite_engine.connect() as sqlite_conn:
result = sqlite_conn.execute(select(sqlite_table))
rows = result.fetchall()
columns = result.keys()
if not rows:
print(f"Table {table_name} is empty, skipping...")
return {'status': 'empty', 'records': 0}
print(f"Found {len(rows)} records in {table_name}")
if self.dry_run:
print(f"DRY RUN: Would migrate {len(rows)} records to {table_name}")
return {'status': 'dry_run', 'records': len(rows)}
# Prepare data for PostgreSQL
data_to_insert = []
for row in rows:
row_dict = dict(zip(columns, row))
# Handle data type conversions if needed
converted_row = self.convert_row_data(table_name, row_dict)
data_to_insert.append(converted_row)
# Insert data into PostgreSQL
with self.postgres_engine.connect() as postgres_conn:
# Clear existing data (if any)
postgres_conn.execute(postgres_table.delete())
# Insert new data
if data_to_insert:
postgres_conn.execute(postgres_table.insert(), data_to_insert)
postgres_conn.commit()
print(f"Successfully migrated {len(data_to_insert)} records to {table_name}")
return {'status': 'success', 'records': len(data_to_insert)}
except Exception as e:
print(f"Error migrating table {table_name}: {e}")
self.stats['errors'].append(f"Table {table_name}: {e}")
return {'status': 'error', 'error': str(e), 'records': 0}
def convert_row_data(self, table_name: str, row_data: Dict[str, Any]) -> Dict[str, Any]:
"""Convert SQLite data types to PostgreSQL compatible format."""
converted = {}
for column, value in row_data.items():
if value is None:
converted[column] = None
elif isinstance(value, str):
# Handle string data
converted[column] = value
elif isinstance(value, (int, float)):
# Handle numeric data
converted[column] = value
elif isinstance(value, bytes):
# Handle binary data
converted[column] = value
else:
# Convert other types to string
converted[column] = str(value)
return converted
def verify_migration(self) -> bool:
"""Verify that migration was successful by comparing record counts."""
print("\nVerifying migration...")
verification_passed = True
for table_name in self.get_table_dependencies():
try:
# Count records in SQLite
with self.sqlite_engine.connect() as sqlite_conn:
result = sqlite_conn.exec_driver_sql(f"SELECT COUNT(*) FROM {table_name}")
sqlite_count = result.scalar()
# Count records in PostgreSQL
with self.postgres_engine.connect() as postgres_conn:
result = postgres_conn.exec_driver_sql(f"SELECT COUNT(*) FROM {table_name}")
postgres_count = result.scalar()
print(f"{table_name}: SQLite={sqlite_count}, PostgreSQL={postgres_count}")
if sqlite_count != postgres_count:
print(f"❌ Record count mismatch in {table_name}")
verification_passed = False
else:
print(f"{table_name} verified successfully")
except Exception as e:
print(f"❌ Error verifying {table_name}: {e}")
verification_passed = False
return verification_passed
def run_migration(self, create_backup: bool = True) -> bool:
"""Run the complete migration process."""
print("=== SIDEL ScriptsManager: SQLite to PostgreSQL Migration ===")
self.stats['start_time'] = datetime.now()
try:
# Create backup if requested
if create_backup and not self.dry_run:
backup_path = self.create_backup()
if backup_path:
print(f"Backup created: {backup_path}")
else:
print("Warning: Could not create backup")
# Connect to databases
if not self.connect_databases():
return False
# Get migration order
tables_to_migrate = self.get_table_dependencies()
print(f"\nTables to migrate: {tables_to_migrate}")
# Migrate each table
for table_name in tables_to_migrate:
result = self.migrate_table_data(table_name)
if result['status'] == 'success':
self.stats['tables_migrated'] += 1
self.stats['total_records'] += result['records']
# Verify migration (skip for dry run)
if not self.dry_run:
verification_passed = self.verify_migration()
if not verification_passed:
print("\n❌ Migration verification failed!")
return False
self.stats['end_time'] = datetime.now()
duration = self.stats['end_time'] - self.stats['start_time']
print("\n=== Migration Summary ===")
print(f"Duration: {duration}")
print(f"Tables migrated: {self.stats['tables_migrated']}")
print(f"Total records: {self.stats['total_records']}")
print(f"Errors: {len(self.stats['errors'])}")
if self.stats['errors']:
print("\nErrors encountered:")
for error in self.stats['errors']:
print(f" - {error}")
if self.dry_run:
print("\n✅ DRY RUN completed successfully")
else:
print("\n✅ Migration completed successfully")
return True
except Exception as e:
print(f"\n❌ Migration failed: {e}")
return False
finally:
# Close connections
if self.sqlite_engine:
self.sqlite_engine.dispose()
if self.postgres_engine:
self.postgres_engine.dispose()
def main():
"""Main migration script entry point."""
parser = argparse.ArgumentParser(description="Migrate SIDEL ScriptsManager from SQLite to PostgreSQL")
parser.add_argument(
'--source',
default='data/scriptsmanager.db',
help='SQLite database file path (default: data/scriptsmanager.db)'
)
parser.add_argument(
'--target',
default=os.getenv('DATABASE_URL'),
help='PostgreSQL connection URL (default: from DATABASE_URL env var)'
)
parser.add_argument(
'--dry-run',
action='store_true',
help='Perform a dry run without making changes'
)
parser.add_argument(
'--no-backup',
action='store_true',
help='Skip creating backup before migration'
)
args = parser.parse_args()
# Validate arguments
if not args.target:
print("Error: PostgreSQL target URL must be specified via --target or DATABASE_URL environment variable")
sys.exit(1)
if not args.target.startswith(('postgresql://', 'postgresql+')):
print("Error: Target URL must be a PostgreSQL connection string")
sys.exit(1)
if not Path(args.source).exists():
print(f"Error: SQLite database file not found: {args.source}")
sys.exit(1)
# Run migration
migrator = DatabaseMigrator(args.source, args.target, args.dry_run)
success = migrator.run_migration(create_backup=not args.no_backup)
sys.exit(0 if success else 1)
if __name__ == '__main__':
main()
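The ordering logic in `get_table_dependencies` reduces to a small pure function: preferred (foreign-key) order first, then any remaining tables, skipping SQLite internals. A standalone sketch with illustrative names, not code from the script:

```python
def order_tables(available: list, preferred: list) -> list:
    """Preferred (FK-dependency) order first, then leftovers, skipping SQLite internals."""
    ordered = [t for t in preferred if t in available]
    ordered += [t for t in available if t not in ordered and not t.startswith("sqlite_")]
    return ordered

print(order_tables(["scripts", "sqlite_sequence", "users", "audit"], ["users", "scripts"]))
# → ['users', 'scripts', 'audit']
```

Tables outside the hand-written dependency list are migrated last, which is safe only if they have no foreign keys into later tables; the comment in the script ("This should be updated based on your actual schema") acknowledges exactly that.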

@@ -12,6 +12,7 @@ eventlet>=0.30.0
# Database
# sqlite3 is built-in with Python 3.12+
psycopg2-binary>=2.9.0  # PostgreSQL adapter
# Conda Environment Management
psutil>=5.9.0

scripts/run_app.py Normal file → Executable file
@@ -26,7 +26,8 @@ def run_app():
if __name__ == "__main__":
print("Starting ScriptsManager...")
print("🚀 Application will be available at: http://localhost:5002")
print("🔄 HOT-RELOAD ENABLED - Modify files and they will update automatically!")
print("Press Ctrl+C to stop the server")
print("-" * 50)
run_app()

sql/01_init_database.sql Normal file
@@ -0,0 +1,25 @@
-- SIDEL ScriptsManager PostgreSQL Database Initialization
-- This script creates the initial database structure for PostgreSQL
-- Enable necessary extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pg_trgm"; -- For text search capabilities
-- Create application schema (optional, can use public schema)
-- CREATE SCHEMA IF NOT EXISTS scriptsmanager;
-- SET search_path TO scriptsmanager, public;
-- Database configuration for better performance
ALTER DATABASE scriptsmanager SET timezone TO 'UTC';
ALTER DATABASE scriptsmanager SET log_statement TO 'all';
ALTER DATABASE scriptsmanager SET log_min_duration_statement TO 1000; -- Log slow queries
-- Grant necessary permissions to the application user
GRANT CONNECT ON DATABASE scriptsmanager TO scriptsmanager;
GRANT USAGE ON SCHEMA public TO scriptsmanager;
GRANT CREATE ON SCHEMA public TO scriptsmanager;
-- Create sequences for auto-incrementing IDs (SQLAlchemy will handle this automatically, but good to have)
-- Note: SQLAlchemy will create these automatically when creating tables
COMMENT ON DATABASE scriptsmanager IS 'SIDEL ScriptsManager Application Database';
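The `pg_trgm` extension enabled above matches text by comparing sets of three-character substrings. A rough Python approximation of the idea — the real extension also normalizes and splits words, so this is only a sketch:

```python
def trigrams(s: str) -> set:
    # pg_trgm-style padding: two spaces before, one after, then a sliding 3-char window
    padded = f"  {s.lower()} "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a: str, b: str) -> float:
    # Jaccard similarity over trigram sets, like pg_trgm's similarity()
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

print(round(similarity("script", "scripts"), 2))  # → 0.67
```

This is what backs the (commented-out) GIN indexes in `sql/02_indexes.sql`: similar strings share most of their trigrams, so the index can answer fuzzy lookups.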

sql/02_indexes.sql Normal file
@@ -0,0 +1,27 @@
-- SIDEL ScriptsManager PostgreSQL Indexes and Performance Optimization
-- This script creates indexes and performance optimizations
-- Performance and logging settings
-- Note: shared_preload_libraries and the log_* parameters below are server-level
-- settings; a plain SET at session level fails. Configure them via ALTER SYSTEM
-- (or postgresql.conf) and restart/reload the server:
-- ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';  -- requires restart
-- ALTER SYSTEM SET log_statement = 'ddl';
-- ALTER SYSTEM SET log_checkpoints = on;
-- ALTER SYSTEM SET log_connections = on;
-- ALTER SYSTEM SET log_disconnections = on;
-- Note: The following indexes will be created by SQLAlchemy when tables are created
-- This file serves as documentation and can be used for manual optimization
-- Indexes for common queries (will be created automatically by SQLAlchemy)
-- CREATE INDEX IF NOT EXISTS idx_users_username ON users(username);
-- CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
-- CREATE INDEX IF NOT EXISTS idx_scripts_user_id ON scripts(user_id);
-- CREATE INDEX IF NOT EXISTS idx_scripts_created_at ON scripts(created_at);
-- CREATE INDEX IF NOT EXISTS idx_execution_logs_script_id ON execution_logs(script_id);
-- CREATE INDEX IF NOT EXISTS idx_execution_logs_timestamp ON execution_logs(timestamp);
-- Full-text search indexes (if needed)
-- CREATE INDEX IF NOT EXISTS idx_scripts_name_search ON scripts USING gin(to_tsvector('english', name));
-- CREATE INDEX IF NOT EXISTS idx_scripts_description_search ON scripts USING gin(to_tsvector('english', description));
-- Comment for documentation
COMMENT ON SCHEMA public IS 'SIDEL ScriptsManager main schema with performance optimizations';

sql/03_default_data.sql Normal file
@@ -0,0 +1,64 @@
-- SIDEL ScriptsManager PostgreSQL Default Data
-- This script inserts default data for the application
-- Default admin user (password should be changed after first login)
-- Note: This will be handled by the application's init_db.py script
-- The password hash below corresponds to 'admin123' - CHANGE IN PRODUCTION
-- Default application settings
-- INSERT INTO settings (key, value, description) VALUES
-- ('app_version', '1.0.0', 'Application version'),
-- ('backup_enabled', 'true', 'Enable automatic backups'),
-- ('max_projects_per_user', '50', 'Maximum projects per user'),
-- ('default_theme', 'light', 'Default UI theme'),
-- ('default_language', 'en', 'Default interface language')
-- ON CONFLICT (key) DO NOTHING;
-- Default script categories/tags
-- INSERT INTO script_categories (name, description) VALUES
-- ('analysis', 'Data analysis scripts'),
-- ('automation', 'Process automation scripts'),
-- ('reporting', 'Report generation scripts'),
-- ('maintenance', 'System maintenance scripts'),
-- ('development', 'Development and testing scripts')
-- ON CONFLICT (name) DO NOTHING;
-- PostgreSQL-specific maintenance tasks
-- Create a function to clean up old execution logs
CREATE OR REPLACE FUNCTION cleanup_old_execution_logs(retention_days INTEGER DEFAULT 30)
RETURNS INTEGER AS $$
DECLARE
deleted_count INTEGER;
BEGIN
DELETE FROM execution_logs
WHERE timestamp < (CURRENT_DATE - INTERVAL '1 day' * retention_days);
GET DIAGNOSTICS deleted_count = ROW_COUNT;
RETURN deleted_count;
END;
$$ LANGUAGE plpgsql;
-- Comment the function
COMMENT ON FUNCTION cleanup_old_execution_logs(INTEGER) IS 'Cleans up execution logs older than specified days';
-- Create a function to get database statistics
CREATE OR REPLACE FUNCTION get_database_stats()
RETURNS TABLE (
table_name TEXT,
row_count BIGINT,
table_size TEXT
) AS $$
BEGIN
RETURN QUERY
SELECT
schemaname||'.'||tablename as table_name,
n_live_tup as row_count,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) as table_size
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
END;
$$ LANGUAGE plpgsql;
-- Comment the function
COMMENT ON FUNCTION get_database_stats() IS 'Returns statistics about database tables and sizes';
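The retention arithmetic in `cleanup_old_execution_logs` (`CURRENT_DATE - INTERVAL '1 day' * retention_days`) maps directly onto Python's `datetime`; a sketch for reasoning about which rows the function will delete (`retention_cutoff` is illustrative, not part of the application):

```python
from datetime import date, timedelta

def retention_cutoff(retention_days: int = 30, today: date = None) -> date:
    # Mirrors CURRENT_DATE - INTERVAL '1 day' * retention_days:
    # rows with timestamp strictly before this date are deleted
    today = today or date.today()
    return today - timedelta(days=retention_days)

print(retention_cutoff(30, date(2025, 9, 13)))  # → 2025-08-14
```

Because the SQL compares against `CURRENT_DATE` (midnight), a log written at any time on the cutoff day itself survives until the next day's run.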