SIDEL ScriptsManager - Multi-Language Script Manager Application Specification
Overview
A Python Flask web application for Linux environments that automatically discovers and executes scripts from a backend directory structure. The SIDEL ScriptsManager provides user management, multi-language support, dark/light mode themes, SIDEL corporate branding, and advanced script metadata management with automatic header parsing and web interface lifecycle management.
Basic Concepts
Script Architecture
Each script in SIDEL ScriptsManager follows a standardized proxy-based architecture pattern:
- Proxy Integration: Scripts integrate directly with the main ScriptsManager Flask application through a proxy system
- No External Ports: Scripts do not run independent Flask servers or require external port management
- Automatic Context Injection: Scripts receive essential context automatically through global variables:
- WORKSPACE_PATH: Path to persistent data storage for the script
- PARAMETERS: Script parameters passed from ScriptsManager interface
- ENVIRONMENT: Dictionary containing user and project context:
  - USER_ID: Current user's unique identifier
  - USER_LEVEL: Current user's permission level (admin, developer, operator, viewer)
  - PROJECT_ID: Current project identifier for data isolation
  - PROJECT_NAME: Human-readable project name for display purposes
  - USER_THEME: Current user's selected theme (light, dark)
  - USER_LANGUAGE: Current user's selected language (en, es, it, fr)
- app: Pre-configured Flask application instance for route registration
- URL Routing: Scripts are accessible through proxy URLs:
http://localhost:5002/project/{project_id}/script/{script_id}/user/{user_id}/
- SIDEL Branding Integration: Scripts receive SIDEL corporate branding through the integrated Flask application
- Consistent User Experience: All script interfaces maintain SIDEL visual identity, theme, and language through the proxy system
Data Persistence Architecture
data/
├── script_groups/
│ ├── group1/
│ │ ├── user1/
│ │ │ ├── project_default/
│ │ │ │ ├── config.json
│ │ │ │ ├── settings.json
│ │ │ │ └── custom_data.json
│ │ │ └── project_alpha/
│ │ │ ├── config.json
│ │ │ └── results.json
│ │ └── user2/
│ │ └── project_default/
│ │ └── config.json
│ └── group2/
│ └── user1/
│ └── project_beta/
│ └── workflow.json
├── logs/
│ ├── executions/
│ │ ├── user1/
│ │ │ ├── 2025-09-12/
│ │ │ │ ├── execution_abc123.log
│ │ │ │ ├── execution_def456.log
│ │ │ │ └── execution_ghi789.log
│ │ │ └── 2025-09-11/
│ │ │ └── execution_jkl012.log
│ │ └── user2/
│ │ └── 2025-09-12/
│ │ └── execution_mno345.log
│ ├── system/
│ │ ├── application.log
│ │ ├── error.log
│ │ └── access.log
│ └── audit/
│ ├── user_actions.log
│ └── admin_actions.log
Multi-User & Multi-Project Management
- User Isolation: Each user maintains separate configuration and data files
- Project Segregation: Users can work on multiple projects with independent settings
- Default Project: Every user automatically gets a project_default for immediate use
- Session Isolation: Proxy system ensures user sessions don't interfere with each other
Script Lifecycle
- Initialization: ScriptsManager discovers and registers the script in the proxy system
- Execution Request: User requests script execution through ScriptsManager interface
- Context Injection: ScriptsManager automatically provides global variables (WORKSPACE_PATH, PARAMETERS, ENVIRONMENT, app)
- Proxy Integration: Script integrates with the main Flask application through route registration
- URL Access: ScriptsManager redirects user to proxy URL for script interface
- Session Management: Script maintains user session and project context through the proxy system
- Log Generation: Comprehensive logging throughout script execution with user context
- Graceful Cleanup: Script resources are cleaned up when user session ends or navigates away
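The context-injection step (3) above can be approximated in a few lines: the manager compiles the script source and executes it against a globals dictionary that already contains WORKSPACE_PATH, PARAMETERS, ENVIRONMENT, and app. This is an illustrative sketch, not the actual ScriptsManager internals; `run_script_with_context` is a hypothetical name, and `app=object()` stands in for the real Flask instance to keep the example dependency-free.

```python
def run_script_with_context(script_source, workspace_path, parameters, environment, app):
    """Execute a script body with ScriptsManager-style globals pre-injected."""
    script_globals = {
        "WORKSPACE_PATH": workspace_path,  # step 3: automatic context injection
        "PARAMETERS": parameters,
        "ENVIRONMENT": environment,
        "app": app,                        # the shared Flask instance in the real system
    }
    exec(compile(script_source, "<managed-script>", "exec"), script_globals)
    return script_globals

# A minimal script body using the proxy-detection pattern from this spec
source = (
    "try:\n"
    "    PROXY_MODE = bool(WORKSPACE_PATH)\n"
    "except NameError:\n"
    "    PROXY_MODE = False\n"
)

ctx = run_script_with_context(source, "/tmp/ws", {"x": 1}, {"USER_ID": "user1"}, app=object())
print(ctx["PROXY_MODE"])  # True: the injected WORKSPACE_PATH was visible to the script
```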
User-Centric Logging Architecture
- User Isolation: Each user can only access their own execution logs
- Project Context: Logs are associated with specific user projects for better organization
- Real-time Streaming: Live log updates during script execution via WebSocket
- Persistent Storage: All execution logs are permanently stored for future reference
- Comprehensive Capture: Logs include:
- Standard Output: All script output and results
- Error Output: Error messages and stack traces
- Debug Information: Internal system messages and performance metrics
- Execution Context: Parameters, environment, duration, and exit codes
- Session Metadata: User, project, script, and interface information
- Post-Execution Access: Users can review logs after script completion
- Retention Management: Configurable log retention based on user level and storage policies
- Export Capabilities: Download logs in multiple formats for external analysis
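A minimal sketch of how one execution record could land in the per-user, per-day layout shown in the data tree above. The function name and the JSON record shape are assumptions for illustration; the real log model carries the full column set from the execution_logs table.

```python
import json
import os
import uuid
from datetime import datetime, timezone

def write_execution_log(base_dir, user, record):
    """Store one execution record under logs/executions/{user}/{YYYY-MM-DD}/."""
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    log_dir = os.path.join(base_dir, "logs", "executions", user, day)
    os.makedirs(log_dir, exist_ok=True)
    path = os.path.join(log_dir, f"execution_{record['execution_uuid']}.log")
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(record, fh, indent=2)  # output, errors, and context in one record
    return path

record = {
    "execution_uuid": uuid.uuid4().hex[:6],
    "script": "backup_system.py",
    "project": "project_default",
    "status": "completed",
    "exit_code": 0,
    "output": "backup finished",
}
print(write_execution_log("/tmp/sm_data", "user1", record))
```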
ScriptsManager Proxy System
Proxy Architecture Overview
The ScriptsManager proxy system provides a unified interface for script execution without requiring individual Flask servers or dynamic port management. All scripts are integrated into the main ScriptsManager Flask application through a sophisticated proxy routing system.
How the Proxy System Works
1. Script Integration
- No Separate Servers: Scripts do not run independent Flask servers
- Direct Integration: Scripts register routes directly with the main Flask application
- Shared Application Context: All scripts share the same Flask application instance
- Automatic Context Injection: Essential variables are automatically provided to scripts
2. URL Routing Structure
Scripts are accessible through a standardized URL pattern:
http://localhost:5002/project/{project_id}/script/{script_id}/user/{user_id}/
URL Components:
- localhost:5002: ScriptsManager main application port (only external port needed)
- project/{project_id}: Specific project context for data isolation
- script/{script_id}: Target script identifier
- user/{user_id}: User context for permissions and session management
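The pattern is simple enough to assemble with a format string; a tiny helper like the following (the name `proxy_url` is illustrative) keeps every generated URL consistent with the documented scheme:

```python
BASE_URL = "http://localhost:5002"  # the only externally exposed port

def proxy_url(project_id, script_id, user_id):
    """Assemble the standardized proxy URL for a script interface."""
    return f"{BASE_URL}/project/{project_id}/script/{script_id}/user/{user_id}/"

print(proxy_url("project456", "backup_system", "user123"))
# http://localhost:5002/project/project456/script/backup_system/user/user123/
```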
3. Automatic Context Variables
When a script is executed through the proxy system, the following global variables are automatically available:
# Automatically injected by ScriptsManager proxy system
WORKSPACE_PATH # str: Path to script's workspace directory
PARAMETERS # dict: Parameters passed from ScriptsManager interface
ENVIRONMENT # dict: User and project context information
app # Flask: Main ScriptsManager Flask application instance
ENVIRONMENT Dictionary Contents:
{
    'USER_ID': 'user123',              # Current user identifier
    'USER_LEVEL': 'operator',          # User permission level
    'PROJECT_ID': 'project456',        # Current project identifier
    'PROJECT_NAME': 'Water Analysis',  # Human-readable project name
    'USER_THEME': 'dark',              # User's selected theme
    'USER_LANGUAGE': 'es'              # User's selected language
}
4. Script Implementation Pattern
Scripts must follow this standardized pattern for proxy compatibility:
"""
ScriptsManager Metadata:
@description: Your script description
@required_level: operator
@flask_port: auto
"""
# Proxy mode detection and configuration
try:
    # These variables are automatically provided by ScriptsManager proxy
    print(f"Workspace: {WORKSPACE_PATH}")
    print(f"Parameters: {PARAMETERS}")
    print(f"Environment: {ENVIRONMENT}")

    # Extract commonly used values
    USER_ID = ENVIRONMENT.get('USER_ID')
    PROJECT_ID = ENVIRONMENT.get('PROJECT_ID')
    PROJECT_NAME = ENVIRONMENT.get('PROJECT_NAME')
    USER_LEVEL = ENVIRONMENT.get('USER_LEVEL')

    PROXY_MODE = True
    print("✅ Running in ScriptsManager proxy mode")
except NameError:
    # Fallback for testing mode only
    PROXY_MODE = False
    print("⚠️ Running in testing mode")
    # Set up testing variables...

# Flask routes using the provided 'app' instance
@app.route("/")
def script_main():
    return "Your script interface"

@app.route("/calculate", methods=['POST'])
def calculate():
    # Your script logic
    pass
5. Session and Permission Management
- Automatic Authentication: Proxy system handles user authentication
- Permission Validation: Access control based on script requirements and user level
- Session Context: User sessions are maintained across the proxy system
- Project Isolation: Data and configurations are isolated by project
6. Benefits of Proxy System
- Simplified Deployment: No need to manage multiple Flask servers
- Unified Port Management: Only one external port (5002) required
- Automatic Context: Essential variables provided automatically
- Session Consistency: Unified session management across all scripts
- Simplified Development: Focus on script logic rather than Flask server management
- Resource Efficiency: Shared Flask application reduces resource overhead
Migration from Port-Based System
For existing scripts using the old port-based system:
- Remove Flask App Creation: Delete custom Flask app instantiation
- Remove Port Arguments: Remove port-related command line arguments
- Use Provided Context: Access WORKSPACE_PATH, PARAMETERS, ENVIRONMENT directly
- Register Routes: Use the provided app instance for route registration
- Update Testing: Modify testing mode to work without custom Flask server
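The migration steps above can be sketched as a before/after comparison. The `get_context` helper below is hypothetical, not part of the ScriptsManager API; it just packages the recommended try/except fallback into one place.

```python
# BEFORE (port-based): each script created its own Flask app and parsed a
# --port argument, e.g.:
#
#   from flask import Flask
#   import argparse
#   app = Flask(__name__)
#   args = argparse.ArgumentParser().parse_args()  # with a --port option
#   app.run(port=args.port)
#
# AFTER (proxy-based): no app creation and no port handling; the script reads
# the injected context and registers routes on the provided `app`.

def get_context():
    """Return (workspace, parameters, environment, proxy_mode), with test fallbacks."""
    try:
        return WORKSPACE_PATH, PARAMETERS, ENVIRONMENT, True
    except NameError:
        # Testing mode: ScriptsManager did not inject the globals
        return "/tmp/test_workspace", {}, {"USER_ID": "tester"}, False

workspace, parameters, environment, proxy_mode = get_context()
print(proxy_mode)  # False when run outside the proxy
```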
Core Features
1. Automatic Script Discovery
- Directory Structure: Scans app/backend/script_groups/ for script collections
- ⚠️ Important: Scripts must be located ONLY in app/backend/script_groups/ - there should be NO separate backend/script_groups/ directory
- Metadata Support: Reads JSON configuration files for script descriptions and parameters
- Header Parsing: Automatically extracts script metadata from file headers on first discovery
- Dynamic Loading: Automatically detects new scripts without application restart
- File Types: Supports Python scripts (.py)
- Conda Environment Management: Each script group can use a different conda environment
- Environment Auto-Detection: Automatically detects available conda environments on Windows/Linux
- Script Metadata Editing: Developers and administrators can edit script descriptions and execution levels
- Web Interface Management: Automatically manages script integration with the main Flask application through the proxy system
2. User Management & Access Control
- User Levels:
  - admin: Full access to all scripts, user management, and script metadata editing
  - developer: Access to development and testing scripts, script metadata editing capabilities
  - operator: Access to production and operational scripts
  - viewer: Read-only access to logs and documentation
- Authentication: Simple login system with session management
- Permission System: Script-level permissions based on user roles
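One plausible way to implement the script-level permission check: treat the four levels as an ordered hierarchy, so "developer+" simply means "index at or above developer". The ordering viewer < operator < developer < admin is implied by the level descriptions above but is an assumption of this sketch.

```python
# Assumed ordering, lowest to highest privilege
LEVELS = ["viewer", "operator", "developer", "admin"]

def has_access(user_level, required_level):
    """True when the user's level meets or exceeds the script's required level."""
    return LEVELS.index(user_level) >= LEVELS.index(required_level)

print(has_access("developer", "operator"))  # True: developer+ covers operator scripts
print(has_access("viewer", "admin"))        # False: viewers are read-only
```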
3. Multi-Language Support
- Primary Language: English (default)
- Supported Languages:
- Spanish (es)
- Italian (it)
- French (fr)
- Translation Files: JSON-based language files in translations/ directory
- Dynamic Switching: Language can be changed without logout
4. User Interface
- Theme Support: Light/Dark mode toggle with user preference persistence
- Responsive Design: Bootstrap-based responsive layout
- Real-time Updates: WebSocket integration for live log streaming
- Modern UI: Clean, intuitive interface with icons and visual feedback
- Script Metadata Editor: Inline editing capabilities for script descriptions and execution levels (developer+)
- Web Interface Lifecycle: Automatic management of script integration with the proxy system
Technical Architecture
⚠️ Directory Structure Clarification
IMPORTANT: All scripts must be located in app/backend/script_groups/ only. There should be NO separate backend/script_groups/ directory at the root level.
Correct Structure:
- ✅ app/backend/script_groups/ - Contains all script groups and scripts
- ❌ backend/script_groups/ - Should NOT exist
Rationale: This ensures a clean separation of concerns and prevents confusion between the application backend and the scripts directory.
Backend Structure
app/
├── app.py # Main Flask application
├── config/
│ ├── config.py # Application configuration
│ ├── database.py # Database models and setup
│ └── permissions.py # Permission management
├── backend/
│ └── script_groups/ # Auto-discovered script directories
│ ├── group1/
│ │ ├── metadata.json # Group description and settings
│ │ ├── script1.py
│ │ ├── script2.sh
│ │ ├── docs/ # Markdown documentation files
│ │ │ ├── script1.md
│ │ │ ├── script1_es.md
│ │ │ ├── script1_it.md
│ │ │ └── script1_fr.md
│ │ └── scripts/
│ │ └── script_metadata.json
│ └── group2/
│ ├── metadata.json
│ └── scripts/
├── translations/
│ ├── en.json # English (default)
│ ├── es.json # Spanish
│ ├── it.json # Italian
│ └── fr.json # French
├── static/
│ ├── css/
│ │ ├── main.css
│ │ ├── themes.css # Light/dark themes
│ │ ├── responsive.css
│ │ └── markdown-viewer.css # Markdown styling
│ ├── js/
│ │ ├── main.js
│ │ ├── websocket.js # Real-time log updates
│ │ ├── theme-manager.js # Theme switching
│ │ ├── language-manager.js # Language switching
│ │ ├── markdown-viewer.js # Markdown rendering
│ │ └── markdown-editor.js # Markdown editing
│ ├── images/
│ │ └── SIDEL.png # SIDEL corporate logo
│ └── icons/
├── templates/
│ ├── base.html # Base template with theme/language support
│ ├── login.html # Authentication
│ ├── dashboard.html # Main script discovery interface
│ ├── script_group.html # Individual group view
│ ├── logs.html # User log viewer with filtering and search
│ ├── log_detail.html # Detailed log view for specific execution
│ └── admin/
│ ├── users.html # User management
│ ├── permissions.html # Permission management
│ └── system_logs.html # System-wide log management (admin only)
├── models/
│ ├── user.py # User model
│ ├── script.py # Script metadata model
│ └── execution_log.py # Enhanced execution log model with user context
├── services/
│ ├── script_discovery.py # Auto-discovery service with header parsing
│ ├── script_executor.py # Script execution service with comprehensive logging
│ ├── user_service.py # User management
│ ├── translation_service.py # Multi-language support
│ ├── conda_service.py # Conda environment management
│ ├── metadata_service.py # Script metadata management and editing
│ ├── script_proxy_service.py # Script proxy integration and routing
│ ├── data_manager.py # Data persistence and project management
│ ├── markdown_service.py # Markdown processing and editing
│ ├── log_service.py # User-centric log management service
│ ├── websocket_service.py # Real-time log streaming service
│ ├── tags_service.py # User tags management
│ └── backup_service.py # System backup management
└── utils/
├── permissions.py # Permission decorators
├── validators.py # Input validation
└── helpers.py # Utility functions
Database Schema
-- Users table
CREATE TABLE users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username VARCHAR(50) UNIQUE NOT NULL,
email VARCHAR(100) UNIQUE NOT NULL,
password_hash VARCHAR(255) NOT NULL,
user_level VARCHAR(20) NOT NULL,
preferred_language VARCHAR(5) DEFAULT 'en',
preferred_theme VARCHAR(10) DEFAULT 'light',
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
last_login TIMESTAMP,
is_active BOOLEAN DEFAULT TRUE
);
-- Script groups table
CREATE TABLE script_groups (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name VARCHAR(100) NOT NULL,
directory_path VARCHAR(255) NOT NULL,
description TEXT,
required_level VARCHAR(20) NOT NULL,
conda_environment VARCHAR(100),
is_active BOOLEAN DEFAULT TRUE,
discovered_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Scripts table
CREATE TABLE scripts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
group_id INTEGER REFERENCES script_groups(id),
filename VARCHAR(100) NOT NULL,
display_name VARCHAR(100),
description TEXT,
description_long_path VARCHAR(255),
tags TEXT, -- Comma-separated tags
required_level VARCHAR(20) NOT NULL,
parameters JSON,
is_active BOOLEAN DEFAULT TRUE,
last_modified TIMESTAMP
);
-- User script tags table (for user-specific tagging)
CREATE TABLE user_script_tags (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER REFERENCES users(id),
script_id INTEGER REFERENCES scripts(id),
tags TEXT, -- Comma-separated user-specific tags
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
UNIQUE(user_id, script_id)
);
-- Conda environments table
CREATE TABLE conda_environments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name VARCHAR(100) UNIQUE NOT NULL,
path VARCHAR(255) NOT NULL,
python_version VARCHAR(20),
is_available BOOLEAN DEFAULT TRUE,
detected_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
last_verified TIMESTAMP
);
-- User projects table
CREATE TABLE user_projects (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER REFERENCES users(id),
project_name VARCHAR(100) NOT NULL,
group_id INTEGER REFERENCES script_groups(id),
description TEXT,
is_default BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
last_accessed TIMESTAMP,
UNIQUE(user_id, project_name, group_id)
);
-- Script execution processes table
CREATE TABLE script_processes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
script_id INTEGER REFERENCES scripts(id),
user_id INTEGER REFERENCES users(id),
process_id INTEGER NOT NULL,
proxy_session_id VARCHAR(100),
status VARCHAR(20) NOT NULL,
started_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
last_ping TIMESTAMP,
ended_at TIMESTAMP
);
-- Execution logs table (User-centric logging)
CREATE TABLE execution_logs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
script_id INTEGER REFERENCES scripts(id),
user_id INTEGER REFERENCES users(id),
project_id INTEGER REFERENCES user_projects(id),
session_id VARCHAR(100), -- Links to script interface session
execution_uuid VARCHAR(36) UNIQUE NOT NULL, -- Unique execution identifier
status VARCHAR(20) NOT NULL, -- 'running', 'completed', 'failed', 'terminated'
start_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
end_time TIMESTAMP,
duration_seconds INTEGER,
output TEXT, -- Standard output from script execution
error_output TEXT, -- Error output from script execution
debug_output TEXT, -- Debug information and internal logs
exit_code INTEGER,
parameters JSON, -- Execution parameters for reference
conda_environment VARCHAR(100), -- Environment used for execution
proxy_url VARCHAR(255), -- Proxy URL used for script interface
log_level VARCHAR(10) DEFAULT 'INFO', -- DEBUG, INFO, WARNING, ERROR
tags TEXT, -- Comma-separated tags for log categorization
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
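As a quick sanity check of the schema, the snippet below exercises a trimmed-down subset of the execution_logs table with Python's built-in sqlite3 module (only a few of the documented columns are included to keep the example short):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE execution_logs (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        script_id INTEGER,
        user_id INTEGER,
        execution_uuid VARCHAR(36) UNIQUE NOT NULL,
        status VARCHAR(20) NOT NULL,
        exit_code INTEGER
    )
""")

exec_uuid = str(uuid.uuid4())
conn.execute(
    "INSERT INTO execution_logs (script_id, user_id, execution_uuid, status, exit_code) "
    "VALUES (?, ?, ?, ?, ?)",
    (1, 42, exec_uuid, "completed", 0),
)

row = conn.execute(
    "SELECT status, exit_code FROM execution_logs WHERE execution_uuid = ?",
    (exec_uuid,),
).fetchone()
print(row)  # ('completed', 0)
```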
JSON Configuration Files
Script Header Format for Auto-Discovery
"""
ScriptsManager Metadata:
@description: Creates a complete system backup with compression options
@description_long: docs/backup_system.md
@description_es: Crea una copia de seguridad completa del sistema con opciones de compresión
@description_long_es: docs/backup_system_es.md
@description_it: Crea un backup completo del sistema con opzioni di compressione
@description_long_it: docs/backup_system_it.md
@description_fr: Crée une sauvegarde complète du système avec options de compression
@description_long_fr: docs/backup_system_fr.md
@required_level: admin
@category: backup
@tags: system,maintenance,storage
@parameters: [
  {
    "name": "destination",
    "type": "path",
    "required": true,
    "description": "Backup destination directory"
  },
  {
    "name": "compression",
    "type": "select",
    "options": ["none", "gzip", "bzip2"],
    "default": "gzip"
  }
]
@execution_timeout: 3600
@flask_port: auto
"""
# Your script code here...
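A hedged sketch of how the header-parsing pass might extract these tags: read the module docstring with `ast`, then match `@key: value` lines. The names `TAG_RE` and `parse_script_header` are illustrative, and this version handles single-line tags only; multi-line values such as @parameters would need additional handling in the real discovery service.

```python
import ast
import re

TAG_RE = re.compile(r"^@(\w+):\s*(.*)$", re.MULTILINE)

def parse_script_header(source):
    """Extract @key: value pairs from a script's leading docstring."""
    docstring = ast.get_docstring(ast.parse(source)) or ""
    return {key: value.strip() for key, value in TAG_RE.findall(docstring)}

sample = '''"""
ScriptsManager Metadata:
@description: Creates a complete system backup with compression options
@required_level: admin
@category: backup
@tags: system,maintenance,storage
"""
print("script body")
'''

meta = parse_script_header(sample)
print(meta["description"])  # Creates a complete system backup with compression options
```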
Group Metadata (metadata.json)
{
  "name": "System Administration",
  "description": {
    "en": "System administration and maintenance scripts",
    "es": "Scripts de administración y mantenimiento del sistema",
    "it": "Script di amministrazione e manutenzione del sistema",
    "fr": "Scripts d'administration et de maintenance du système"
  },
  "icon": "server",
  "required_level": "operator",
  "category": "system",
  "conda_environment": "base",
  "auto_discovery": true,
  "execution_timeout": 300
}
Script Metadata (script_metadata.json)
{
  "backup_system.py": {
    "display_name": {
      "en": "System Backup",
      "es": "Copia de Seguridad del Sistema",
      "it": "Backup del Sistema",
      "fr": "Sauvegarde du Système"
    },
    "description": {
      "en": "Creates a complete system backup",
      "es": "Crea una copia de seguridad completa del sistema",
      "it": "Crea un backup completo del sistema",
      "fr": "Crée une sauvegarde complète du système"
    },
    "description_long": {
      "en": "docs/backup_system.md",
      "es": "docs/backup_system_es.md",
      "it": "docs/backup_system_it.md",
      "fr": "docs/backup_system_fr.md"
    },
    "tags": ["system", "maintenance", "storage"],
    "required_level": "admin",
    "category": "backup",
    "parameters": [
      {
        "name": "destination",
        "type": "path",
        "required": true,
        "description": {
          "en": "Backup destination directory",
          "es": "Directorio de destino de la copia",
          "it": "Directory di destinazione del backup",
          "fr": "Répertoire de destination de la sauvegarde"
        }
      },
      {
        "name": "compression",
        "type": "select",
        "options": ["none", "gzip", "bzip2"],
        "default": "gzip",
        "description": {
          "en": "Compression method",
          "es": "Método de compresión",
          "it": "Metodo di compressione",
          "fr": "Méthode de compression"
        }
      }
    ],
    "execution_timeout": 3600,
    "flask_port": "auto",
    "proxy_enabled": true,
    "requires_parameters": true
  }
}
API Endpoints
Authentication
POST /api/auth/login - User login
POST /api/auth/logout - User logout
GET /api/auth/user - Get current user info
Script Discovery & Management
GET /api/script-groups - List all discovered script groups
GET /api/script-groups/{group_id}/scripts - List scripts in group
POST /api/scripts/{script_id}/execute - Execute script
GET /api/scripts/{script_id}/status - Get script execution status
POST /api/scripts/{script_id}/stop - Stop running script
GET /api/scripts/{script_id}/metadata - Get script metadata
PUT /api/scripts/{script_id}/metadata - Update script metadata (developer+)
POST /api/scripts/{script_id}/refresh-metadata - Re-parse script header
GET /api/script-groups/{group_id}/refresh - Refresh all scripts in group
Conda Environment Management
GET /api/conda/environments - List all available conda environments
POST /api/conda/refresh - Refresh conda environment detection
GET /api/conda/environments/{env_name}/info - Get environment details
POST /api/script-groups/{group_id}/conda - Set conda environment for group
User Management (Admin only)
GET /api/admin/users - List all users
POST /api/admin/users - Create new user
PUT /api/admin/users/{user_id} - Update user
DELETE /api/admin/users/{user_id} - Delete user
GET /api/admin/users/{user_id}/projects - Get user's projects
POST /api/admin/users/{user_id}/reset-data - Reset user's data directories
Project Management
GET /api/projects - List current user's projects
POST /api/projects - Create new project for current user
PUT /api/projects/{project_id} - Update project details
DELETE /api/projects/{project_id} - Delete project and associated data
POST /api/projects/{project_id}/set-active - Set active project for session
GET /api/projects/active - Get current active project
Data Management
GET /api/data/{group_id}/{project_id}/files - List data files for project
GET /api/data/{group_id}/{project_id}/(unknown) - Get specific data file
POST /api/data/{group_id}/{project_id}/(unknown) - Create/Update data file
DELETE /api/data/{group_id}/{project_id}/(unknown) - Delete data file
POST /api/data/{group_id}/{project_id}/backup - Create project data backup
Script Documentation & Tags
GET /api/scripts/{script_id}/description-long - Get script long description (Markdown)
PUT /api/scripts/{script_id}/description-long - Update script long description (developer+)
GET /api/scripts/{script_id}/tags - Get script tags
PUT /api/scripts/{script_id}/tags - Update user-specific script tags
GET /api/scripts/search-tags - Search scripts by tags
GET /api/user/tags - Get all user's tags across scripts
System Backup
POST /api/system/backup - Create immediate system backup
GET /api/system/backups - List available system backups
DELETE /api/system/backups/{backup_date} - Delete specific backup
GET /api/system/backup-status - Get backup service status
Logs & Monitoring (User-Centric)
GET /api/logs/execution - Get current user's execution logs (paginated)
GET /api/logs/execution/{execution_id} - Get specific execution log details
GET /api/logs/execution/{execution_id}/download - Download execution log file
GET /api/logs/script/{script_id} - Get user's logs for specific script
GET /api/logs/project/{project_id} - Get user's logs for specific project
GET /api/logs/search - Search user's logs with filters (script, date, status, text)
DELETE /api/logs/execution/{execution_id} - Delete specific execution log (user owns)
POST /api/logs/cleanup - Clean up old logs based on retention policy
GET /api/logs/stats - Get user's logging statistics and usage
WebSocket /ws/logs/{execution_id} - Real-time log streaming for specific execution
WebSocket /ws/logs/user - Real-time log streaming for all user executions
Admin Logs & Monitoring (Admin only)
GET /api/admin/logs/execution - Get all users' execution logs
GET /api/admin/logs/system - Get system-wide logs
GET /api/admin/logs/user/{user_id} - Get specific user's logs
DELETE /api/admin/logs/user/{user_id} - Delete all logs for specific user
GET /api/admin/logs/audit - Get system audit logs
POST /api/admin/logs/export - Export system logs in various formats
Script Proxy Management
POST /api/scripts/{script_id}/proxy-access - Initialize script proxy access for user
GET /api/scripts/{script_id}/proxy-status - Check script proxy status for user
GET /api/scripts/{script_id}/proxy-url - Get proxy URL for script access
POST /api/scripts/{script_id}/proxy-cleanup - Clean up script proxy session
GET /api/active-proxies - List all active script proxy sessions
Internationalization
GET /api/i18n/languages - Get available languages
GET /api/i18n/{language} - Get translations for language
POST /api/user/preferences - Update user preferences
Features Implementation
1. Script Auto-Discovery Service with Header Parsing
# Pseudo-code for auto-discovery with header parsing
class ScriptDiscoveryService:
    def scan_script_groups(self):
        # Scan app/backend/script_groups/ directory
        # Read metadata.json files
        # Detect script files
        # Parse script headers for metadata on first discovery
        # Update database with discovered scripts
        # Handle permission assignments
        # Validate conda environment assignments
        ...

    def parse_script_header(self, script_path):
        # Extract metadata from script comments/docstrings
        # Parse description, required_level, parameters
        # Return structured metadata
        ...

    def watch_for_changes(self):
        # File system watcher for real-time discovery
        # Hot-reload capability
        # Re-parse headers when files change
        ...
2. Conda Environment Service
# Pseudo-code for conda management
class CondaService:
    def detect_environments(self):
        # Scan for conda installations (Windows/Linux)
        # Parse conda environment list
        # Validate environment availability
        # Update database with environment info
        ...

    def get_environment_info(self, env_name):
        # Get Python version and packages
        # Verify environment is functional
        # Return environment details
        ...

    def execute_in_environment(self, env_name, command, args):
        # Activate conda environment
        # Execute command with proper environment
        # Handle cross-platform differences
        ...
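The detection step can lean on conda's own JSON output (`conda env list --json` prints a `{"envs": [...]}` list of paths). The sketch below is a simplified version: it names each environment after its directory basename, which is close enough for display but not how conda itself names the root env.

```python
import json
import os
import subprocess

def parse_env_list(raw_json):
    """Turn conda's JSON environment list into (name, path) pairs."""
    paths = json.loads(raw_json).get("envs", [])
    # Simplification: name each environment after its directory basename
    return [(os.path.basename(p.rstrip("/\\")) or "base", p) for p in paths]

def detect_conda_environments():
    """Query the conda CLI; return [] when conda is not installed."""
    try:
        result = subprocess.run(
            ["conda", "env", "list", "--json"],
            capture_output=True, text=True, check=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return []
    return parse_env_list(result.stdout)

sample = '{"envs": ["/opt/conda", "/opt/conda/envs/analysis"]}'
print(parse_env_list(sample))  # [('conda', '/opt/conda'), ('analysis', '/opt/conda/envs/analysis')]
```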
3. Script Execution Service with Proxy Integration
# Pseudo-code for script execution with proxy system
class ScriptExecutor:
    def execute_script(self, script_id, parameters, user_id, project_id=None):
        # Validate user permissions
        # Get or create user project (default if none specified)
        # Get user preferences (theme, language)
        # Get script group conda environment
        # Prepare execution environment with conda
        # Create user-specific workspace directory
        # Prepare script context variables:
        #   - WORKSPACE_PATH: User/project workspace directory
        #   - PARAMETERS: Script parameters from interface
        #   - ENVIRONMENT: User and project context
        #   - app: Main Flask application instance
        # Inject context into script namespace
        # Import and initialize script with proxy integration
        # Register script routes with proxy URL prefix
        # Generate proxy URL for user access
        # Stream output via WebSocket
        # Log execution details with user context
        # Monitor user session for cleanup
        ...

    def prepare_script_context(self, script_path, workspace_dir, user_context, parameters):
        # Prepare global variables for script injection:
        #   WORKSPACE_PATH = workspace_dir
        #   PARAMETERS = parameters
        #   ENVIRONMENT = {
        #       'USER_ID': user_context.user_id,
        #       'USER_LEVEL': user_context.user_level,
        #       'PROJECT_ID': user_context.project_id,
        #       'PROJECT_NAME': user_context.project_name,
        #       'USER_THEME': user_context.theme,
        #       'USER_LANGUAGE': user_context.language
        #   }
        #   app = main_flask_application
        ...

    def integrate_script_with_proxy(self, script_id, script_module, user_id, project_id):
        # Register script routes with proxy URL prefix
        # Route format: /project/{project_id}/script/{script_id}/user/{user_id}/
        # Handle script context and session management
        # Provide unified Flask application access
        ...
4. Multi-Language Support
# Pseudo-code for translations
class TranslationService:
    def get_translation(self, key, language='en'):
        # Load translation file for language
        # Return translated string
        # Fall back to English if not found
        ...

    def get_user_language(self, user):
        # Return user's preferred language
        ...
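The fallback logic above can be made concrete in a few lines against the translations/ directory layout described earlier. This is a minimal sketch, assuming flat key/value JSON files; the class name matches the pseudocode but the caching and file handling are illustrative choices.

```python
import json
import os
import tempfile

class TranslationService:
    """Loads translations/{lang}.json files with English fallback."""

    def __init__(self, translations_dir):
        self.dir = translations_dir
        self._cache = {}

    def _load(self, language):
        if language not in self._cache:
            try:
                with open(os.path.join(self.dir, f"{language}.json"), encoding="utf-8") as fh:
                    self._cache[language] = json.load(fh)
            except FileNotFoundError:
                self._cache[language] = {}
        return self._cache[language]

    def get_translation(self, key, language="en"):
        value = self._load(language).get(key)
        if value is None and language != "en":
            value = self._load("en").get(key)  # fall back to English
        return value if value is not None else key  # last resort: the key itself

# Usage sketch with throwaway translation files
d = tempfile.mkdtemp()
with open(os.path.join(d, "en.json"), "w", encoding="utf-8") as fh:
    json.dump({"greeting": "Hello", "logout": "Log out"}, fh)
with open(os.path.join(d, "es.json"), "w", encoding="utf-8") as fh:
    json.dump({"greeting": "Hola"}, fh)

t = TranslationService(d)
print(t.get_translation("greeting", "es"))  # Hola
print(t.get_translation("logout", "es"))    # Log out (English fallback)
```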
5. Script Metadata Management Service
# Pseudo-code for metadata management
class MetadataService:
    def update_script_metadata(self, script_id, metadata, user):
        # Validate user permissions (developer+)
        # Update script description, required_level, etc.
        # Maintain version history
        # Validate required_level values
        ...

    def parse_and_update_from_header(self, script_id):
        # Re-parse script file header
        # Update metadata from parsed content
        # Preserve user-edited fields
        ...

    def get_editable_fields(self, user_level):
        # Return list of fields user can edit
        # Based on permission level
        ...
6. Web Interface Manager Service
# Pseudo-code for web interface lifecycle
class WebInterfaceManager:
    def start_script_interface(self, script_id, user_id):
        # Allocate port from available pool
        # Start Flask process for script
        # Register in active interfaces table
        # Generate session ID for tab tracking
        # Return interface URL and session ID
        ...

    def monitor_tab_session(self, session_id):
        # Monitor tab heartbeat via JavaScript
        # Detect tab closure
        # Trigger graceful script shutdown
        ...

    def cleanup_inactive_interfaces(self):
        # Periodic cleanup of orphaned processes
        # Check for unresponsive tabs
        # Terminate associated script processes
        ...

    def get_available_port(self):
        # Find available port in configured range
        # Avoid conflicts with existing interfaces
        # Return available port number
        ...
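For the legacy port-based mode, `get_available_port` can be implemented with a bind probe over the configured range. The 5100-5199 range below is an assumed configuration value for illustration; under the proxy system this allocation is no longer needed.

```python
import socket

def get_available_port(start=5100, end=5199):
    """Return the first free TCP port in [start, end], or None if all are taken."""
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind(("127.0.0.1", port))
            except OSError:
                continue  # port already in use, try the next one
            return port
    return None

print(get_available_port())
```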
8. Data Management Service
# Pseudo-code for data persistence management
class DataManager:
    def __init__(self, base_data_path):
        self.base_path = base_data_path

    def get_user_project_path(self, user_id, group_id, project_name):
        # Returns: data/script_groups/group_{group_id}/user_{user_id}/{project_name}/
        return os.path.join(
            self.base_path,
            "script_groups",
            f"group_{group_id}",
            f"user_{user_id}",
            project_name
        )

    def ensure_project_directory(self, user_id, group_id, project_name):
        # Create directory structure if it doesn't exist
        # Initialize default configuration files
        # Set proper permissions
        ...

    def get_config_file(self, user_id, group_id, project_name, filename):
        # Load JSON configuration file
        # Return parsed data or default values
        ...

    def save_config_file(self, user_id, group_id, project_name, filename, data):
        # Save JSON data to configuration file
        # Validate data structure
        # Handle concurrent access
        ...

    def list_user_projects(self, user_id, group_id):
        # List all projects for user in specific group
        # Return project metadata
        ...

    def create_default_project(self, user_id, group_id):
        # Create 'project_default' for new users
        # Initialize with default settings
        ...

    def backup_project_data(self, user_id, group_id, project_name):
        # Create timestamped backup of project data
        # Compress and store in backups directory
        ...
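The load/save pair can be sketched concretely; the function names mirror the DataManager methods but take a resolved project directory instead of (user_id, group_id, project_name) to stay short. The write goes through a temp file plus `os.replace`, one common way to reduce the risk of a half-written config under concurrent access.

```python
import json
import os
import tempfile

def save_config_file(project_dir, filename, data):
    """Atomically write a JSON config file inside a project directory."""
    os.makedirs(project_dir, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=project_dir, suffix=".tmp")
    with os.fdopen(fd, "w", encoding="utf-8") as fh:
        json.dump(data, fh, indent=2)
    os.replace(tmp, os.path.join(project_dir, filename))  # atomic rename

def get_config_file(project_dir, filename, default=None):
    """Load a JSON config file, returning a default when it does not exist."""
    try:
        with open(os.path.join(project_dir, filename), encoding="utf-8") as fh:
            return json.load(fh)
    except FileNotFoundError:
        return {} if default is None else default

project = os.path.join(
    tempfile.mkdtemp(), "script_groups", "group_1", "user_1", "project_default"
)
save_config_file(project, "config.json", {"theme": "dark"})
print(get_config_file(project, "config.json"))  # {'theme': 'dark'}
```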
9. Script Proxy Service
# Pseudo-code for script proxy integration
class ScriptProxyService:
    def __init__(self, main_flask_app):
        self.app = main_flask_app
        self.active_scripts = {}

    def register_script_routes(self, script_id, script_module):
        # Register script routes with main Flask application
        # Add proxy URL prefix for script routing
        # Handle script context injection
        ...

    def create_script_context(self, user_id, project_id, script_id):
        # Prepare WORKSPACE_PATH, PARAMETERS, ENVIRONMENT
        # Inject context variables into script namespace
        # Return script execution context
        ...

    def get_proxy_url(self, user_id, project_id, script_id):
        # Generate proxy URL for script access
        # Format: /project/{project_id}/script/{script_id}/user/{user_id}/
        # Return complete proxy URL
        ...

    def validate_script_access(self, user_id, script_id):
        # Validate user permissions for script
        # Check script requirements vs user level
        # Return access authorization status
        ...

    def cleanup_script_session(self, user_id, script_id):
        # Clean up script session resources
        # Remove temporary files and context
        # Log session completion
        ...
### 10. Markdown Processing Service
```python
# Pseudo-code for markdown management
class MarkdownService:
    def __init__(self, script_groups_path):
        self.base_path = script_groups_path

    def get_script_long_description(self, script_id, language='en'):
        # Get script metadata and long description path
        # Load markdown file for specified language
        # Return markdown content or fallback to English

    def update_script_long_description(self, script_id, markdown_content, language='en'):
        # Validate user permissions (developer+)
        # Save markdown content to appropriate file
        # Update script metadata with new path

    def render_markdown_to_html(self, markdown_content):
        # Convert markdown to HTML
        # Apply syntax highlighting for code blocks
        # Handle mathematical formulas if needed
        # Return rendered HTML

    def get_available_languages_for_script(self, script_id):
        # Return list of available language versions
        # Check for existing markdown files
```
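The English-fallback rule in `get_script_long_description` can be illustrated with a small resolver. The `{script}_{lang}.md` naming follows the documentation structure shown later in this document; the function itself is a hypothetical helper:

```python
import os

def resolve_doc_path(docs_dir, script_name, language="en"):
    """Pick the language-specific markdown file, falling back to English.

    Naming scheme from the docs/ layout: heat_transfer.md (English),
    heat_transfer_es.md, heat_transfer_it.md, heat_transfer_fr.md.
    """
    if language != "en":
        candidate = os.path.join(docs_dir, f"{script_name}_{language}.md")
        if os.path.exists(candidate):
            return candidate
    # English file doubles as the default when a translation is missing
    return os.path.join(docs_dir, f"{script_name}.md")
```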
### 11. User Tags Service
```python
# Pseudo-code for user tags management
class TagsService:
    def get_user_script_tags(self, user_id, script_id):
        # Get user-specific tags for a script
        # Return list of tags

    def update_user_script_tags(self, user_id, script_id, tags):
        # Update user-specific tags for a script
        # Validate and clean tag format
        # Save to database

    def search_scripts_by_tags(self, user_id, tags, match_all=False):
        # Search scripts by user tags
        # match_all: True for AND operation, False for OR
        # Return list of matching script IDs

    def get_all_user_tags(self, user_id):
        # Get all unique tags used by user
        # Return sorted list of tags with usage count

    def get_script_system_tags(self, script_id):
        # Get system-defined tags from script metadata
        # Return list of system tags

    def merge_script_tags(self, user_id, script_id):
        # Combine system tags and user tags
        # Return unified tag list with source indication
```
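The AND/OR semantics of `search_scripts_by_tags` can be shown with plain sets; here the user's tags are assumed to be available as an in-memory mapping rather than database rows:

```python
def search_scripts_by_tags(user_tags, wanted, match_all=False):
    """Return sorted script IDs whose tag sets match the query.

    user_tags: {script_id: set of tag strings} for one user.
    match_all=True requires every wanted tag (AND); False accepts any (OR).
    """
    wanted = set(wanted)
    hits = []
    for script_id, tags in user_tags.items():
        # subset test for AND, non-empty intersection for OR
        if (wanted <= tags) if match_all else (wanted & tags):
            hits.append(script_id)
    return sorted(hits)
```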
### 12. System Backup Service
```python
# Pseudo-code for system backup management
class BackupService:
    def __init__(self, data_path, backup_path):
        self.data_path = data_path
        self.backup_path = backup_path

    def create_daily_backup(self):
        # Create compressed backup of entire /data/ directory
        # Use current date as backup name: /backup/2025-09-11/
        # Compress using gzip or zip
        # Clean up old backups based on retention policy

    def schedule_daily_backup(self):
        # Set up automatic daily backup at configured time
        # Use threading.Timer or scheduling library
        # Handle backup rotation and cleanup

    def list_available_backups(self):
        # Return list of available backup dates
        # Include backup size and creation time

    def delete_backup(self, backup_date):
        # Delete specific backup by date
        # Validate backup exists before deletion

    def get_backup_status(self):
        # Return current backup service status
        # Include last backup time, next scheduled backup
        # Disk usage information

    def restore_from_backup(self, backup_date, target_path=None):
        # Restore data from specific backup
        # Optional: restore to different location
        # Admin-only operation
```
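A minimal sketch of the daily backup naming and retention policy, assuming `tarfile` for compression; the `_020000` timestamp suffix mirrors the sample backup names shown later, and actual deletion is left to the caller:

```python
import datetime
import os
import tarfile

def create_daily_backup(data_path, backup_path, today=None):
    """Compress data_path into backup/<YYYY-MM-DD>/data_backup_<stamp>.tar.gz."""
    today = today or datetime.date.today()
    day_dir = os.path.join(backup_path, today.isoformat())
    os.makedirs(day_dir, exist_ok=True)
    stamp = today.strftime("%Y%m%d") + "_020000"  # 02:00 schedule, as in the samples
    archive = os.path.join(day_dir, f"data_backup_{stamp}.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(data_path, arcname="data")
    return archive

def prune_old_backups(backup_path, retention_days=30, today=None):
    """Return date-named backup directories older than the retention window."""
    today = today or datetime.date.today()
    cutoff = today - datetime.timedelta(days=retention_days)
    expired = []
    for name in os.listdir(backup_path):
        try:
            day = datetime.date.fromisoformat(name)
        except ValueError:
            continue  # skip backup_logs/ and other non-date entries
        if day < cutoff:
            expired.append(name)
    return sorted(expired)  # caller deletes these directories
```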
### 13. Log Management Service
```python
# Pseudo-code for user-centric log management
class LogService:
    def __init__(self, base_log_path, database):
        self.base_path = base_log_path
        self.db = database

    def create_execution_log(self, script_id, user_id, project_id, session_id):
        # Create new execution log entry in database
        # Generate unique execution UUID
        # Set up log file path: logs/executions/user_{user_id}/YYYY-MM-DD/execution_{uuid}.log
        # Return execution log ID and file path

    def log_execution_start(self, execution_id, parameters, conda_env, port):
        # Log execution start with full context
        # Record parameters, environment, and execution setup
        # Start real-time log streaming setup

    def append_log_output(self, execution_id, output_type, content):
        # Append output to log file and database
        # output_type: 'stdout', 'stderr', 'debug', 'system'
        # Handle real-time streaming to connected WebSocket clients
        # Update database with latest output

    def log_execution_end(self, execution_id, exit_code, duration):
        # Mark execution as completed
        # Calculate final statistics
        # Close log file and update database
        # Notify connected clients of completion

    def get_user_logs(self, user_id, filters=None, page=1, per_page=50):
        # Get paginated list of user's execution logs
        # Apply filters: script_id, project_id, date_range, status
        # Return logs with metadata (script name, project, duration, etc.)

    def get_execution_log_detail(self, execution_id, user_id):
        # Get detailed log information for specific execution
        # Validate user owns the log
        # Return full log content and metadata

    def search_user_logs(self, user_id, search_query, filters=None):
        # Full-text search across user's log content
        # Search in output, parameters, and metadata
        # Return matching executions with highlighted snippets

    def delete_user_log(self, execution_id, user_id):
        # Delete specific execution log (user validation)
        # Remove from database and file system
        # Update storage statistics

    def export_user_logs(self, user_id, format='json', filters=None):
        # Export user's logs in specified format (json, csv, txt)
        # Apply optional filters
        # Return file path or stream for download

    def cleanup_old_logs(self, retention_days=30):
        # Clean up logs older than retention period
        # Respect user-level retention policies
        # Update storage statistics

    def get_user_log_statistics(self, user_id):
        # Return user's logging statistics
        # Total executions, success rate, storage usage
        # Most used scripts and projects
```
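The log file path convention from `create_execution_log` can be captured in one helper (the `logs/` base directory name is an assumption):

```python
import datetime
import os
import uuid

def execution_log_path(base_path, user_id, when=None, execution_id=None):
    """Build logs/executions/user_{user_id}/YYYY-MM-DD/execution_{uuid}.log."""
    when = when or datetime.datetime.now(datetime.timezone.utc)
    execution_id = execution_id or str(uuid.uuid4())  # unique per execution
    return os.path.join(base_path, "executions", f"user_{user_id}",
                        when.strftime("%Y-%m-%d"), f"execution_{execution_id}.log")
```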
### 14. WebSocket Log Streaming Service
```python
# Pseudo-code for real-time log streaming
class WebSocketLogService:
    def __init__(self, socketio_instance):
        self.socketio = socketio_instance
        self.active_connections = {}  # execution_id -> [client_sessions]

    def connect_to_execution_log(self, client_session, execution_id, user_id):
        # Validate user access to execution log
        # Add client to active connections for this execution
        # Send initial log content to client
        # Set up real-time streaming

    def broadcast_log_update(self, execution_id, log_content, output_type):
        # Send log update to all connected clients for this execution
        # Format message with timestamp and output type
        # Handle client disconnections gracefully

    def disconnect_from_execution_log(self, client_session, execution_id):
        # Remove client from active connections
        # Clean up resources if no more clients connected

    def connect_to_user_logs(self, client_session, user_id):
        # Connect to all active executions for user
        # Send updates for any of user's running scripts
        # Useful for dashboard real-time updates

    def get_active_connections_count(self, execution_id):
        # Return number of clients watching this execution
        # Used for resource management decisions
```
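The connection bookkeeping (`execution_id -> [client_sessions]`) can be sketched without Socket.IO; the real service would call `socketio.emit` where this registry is consulted:

```python
from collections import defaultdict

class ConnectionRegistry:
    """In-memory piece of WebSocketLogService: who is watching which execution."""

    def __init__(self):
        self.active = defaultdict(set)  # execution_id -> set of client sessions

    def connect(self, execution_id, client_session):
        self.active[execution_id].add(client_session)

    def disconnect(self, execution_id, client_session):
        self.active[execution_id].discard(client_session)
        if not self.active[execution_id]:
            del self.active[execution_id]  # free resources when last viewer leaves

    def viewer_count(self, execution_id):
        return len(self.active.get(execution_id, ()))
```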
### 15. Session Management Service
```python
# Pseudo-code for multi-user session handling
class SessionManager:
    def create_user_session(self, user_id, active_project_id=None):
        # Create session context for user
        # Set active project (default if none specified)
        # Initialize user-specific settings

    def get_user_context(self, session_id):
        # Return user context including:
        # - User ID and level
        # - Active project
        # - Permissions
        # - Preferences

    def switch_project(self, session_id, project_id):
        # Change active project for session
        # Validate user access to project
        # Update session context

    def cleanup_inactive_sessions(self):
        # Remove expired sessions
        # Release associated resources
```
### 16. Permission Management
```python
# Pseudo-code for permissions
def require_permission(required_level):
    def decorator(func):
        def wrapper(*args, **kwargs):
            # Check user authentication
            # Validate user level
            # Allow or deny access
        return wrapper
    return decorator

def can_edit_metadata(user_level):
    # Returns True if user can edit script metadata
    return user_level in ['developer', 'admin']
```
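A runnable version of the permission decorator, assuming the four-level ordering viewer < operator < developer < admin from the spec; the real application would read the user level from the session rather than an explicit argument:

```python
from functools import wraps

# Permission ordering from the spec: viewer < operator < developer < admin
LEVELS = {"viewer": 0, "operator": 1, "developer": 2, "admin": 3}

def require_permission(required_level):
    def decorator(func):
        @wraps(func)
        def wrapper(user_level, *args, **kwargs):
            # Deny when the caller's level ranks below the requirement
            if LEVELS[user_level] < LEVELS[required_level]:
                raise PermissionError(f"{user_level} may not call {func.__name__}")
            return func(user_level, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("developer")
def edit_metadata(user_level, script_id):
    return f"metadata of script {script_id} updated"
```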
Conda Environment Management
Environment Detection
- Automatic Discovery: Scans system for conda installations
- Cross-Platform Support: Works on both Windows and Linux environments
- Environment Validation: Verifies each environment is functional
- Python Version Detection: Identifies Python version for each environment
- Package Information: Optional package listing for environment details
Environment Assignment
- Group-Level Configuration: Each script group can use a different conda environment
- Default Environment: Falls back to 'base' environment if none specified
- Environment Validation: Ensures assigned environment exists before script execution
- Dynamic Updates: Environment assignments can be changed without restart
Execution Integration
- Seamless Activation: Scripts execute within their assigned conda environment
- Cross-Platform Commands: Handles conda activation differences between Windows/Linux
- Error Handling: Graceful fallback if environment becomes unavailable
- Logging: Environment activation and execution logged for troubleshooting
Management Interface
- Environment List: Visual display of all detected conda environments
- Assignment Interface: Dropdown selection for script group environment assignment
- Status Indicators: Visual indicators for environment availability
- Refresh Capability: Manual refresh of environment detection
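Environment detection would typically shell out to `conda env list --json`; parsing that output can be sketched as follows (the base-prefix heuristic is an assumption):

```python
import json

def parse_conda_envs(json_text):
    """Parse the output of `conda env list --json` into environment names.

    The JSON carries an "envs" key listing absolute prefixes; the last path
    component is the environment name. Root installation prefixes such as
    miniconda3/anaconda3 are mapped to 'base' (heuristic, not exhaustive).
    """
    prefixes = json.loads(json_text)["envs"]
    names = []
    for prefix in prefixes:
        prefix = prefix.replace("\\", "/").rstrip("/")  # normalize Windows paths
        name = prefix.rsplit("/", 1)[-1]
        names.append("base" if name in ("miniconda3", "anaconda3") else name)
    return names
```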
User Interface Components
SIDEL Corporate Branding Integration
- SIDEL Logo: `app/static/images/SIDEL.png` displayed prominently in application header
- Corporate Identity: Consistent SIDEL branding across all pages and script interfaces
- Logo Propagation: SIDEL logo passed to script interfaces for consistent branding
- Responsive Logo: Logo adapts to different screen sizes and theme modes
1. Dashboard (Multi-User)
- SIDEL Header: SIDEL logo and corporate branding in main navigation
- User Context: Display current user information and active project
- Script Group Cards: Visual representation of script groups with icons
- Project Selector: Dropdown to switch between user's projects
- Quick Actions: Recently used scripts and favorites (user-specific)
- System Status: System health indicators and conda environment status
- User Info: Current user, language, theme settings, and active project
- Environment Selector: Quick conda environment overview per group
- Active Sessions: List of user's currently active script proxy sessions
2. Script Group View (Project-Aware with Enhanced Documentation)
- SIDEL Branding: Consistent SIDEL logo and corporate identity
- Project Context: Display current active project at top
- Script List: Filterable and searchable script listing with tag filtering
- Tag Management: Add/edit user-specific tags for scripts
- Documentation Viewer: Integrated Markdown viewer for long descriptions
- Multi-Language Documentation: Switch between available language versions
- Execution Forms: Dynamic forms based on script parameters
- Inline Help: Short descriptions with expandable long descriptions
- Execution History: Previous runs for current user/project
- Environment Info: Display active conda environment for the group
- Environment Management: Change conda environment (admin/developer only)
- Metadata Editor: Inline editing for script descriptions and levels (developer+)
- Markdown Editor: Edit long descriptions in Markdown format (developer+)
- Header Re-parsing: Refresh metadata from script headers
- Active Interfaces: List of currently active script proxy sessions for user
- Project Management: Create, switch, or manage projects (inline controls)
3. Log Viewer (User-Centric)
- User-Isolated Logs: Each user can only view their own script execution logs
- Project-Specific Logs: Logs are organized by user projects for better context
- Real-time Logs: Live log streaming via WebSocket during script execution
- Post-Execution Logs: Persistent log storage for reviewing past script executions
- Execution Status Indicators: Visual status badges (Running, Completed, Failed, Terminated)
- Log Filtering & Search:
- Filter by script name, project, execution date range
- Filter by execution status and duration
- Full-text search across log content and parameters
- Save and reuse filter presets
- Log List View:
- Paginated table with execution summary
- Columns: Script Name, Project, Start Time, Duration, Status, Actions
- Sortable by any column
- Quick preview of output and errors
- Detailed Log View:
- Full execution log with syntax highlighting
- Tabbed interface: Output, Errors, Debug, Parameters, Environment
- Execution timeline and performance metrics
- Download individual log files
- Real-time Monitoring:
- Live updates during script execution
- Progress indicators and execution timeline
- Auto-scroll with pause/resume controls
- Multiple execution monitoring in tabs
- Export & Management:
- Download logs in various formats (TXT, JSON, CSV, PDF)
- Bulk operations: select multiple logs for export or deletion
- Log retention settings per user
- Storage usage statistics and cleanup recommendations
- Integration Features:
- Quick navigation to script or project from log entry
- Re-run script with same parameters from log
- Share log details with developers (admin only)
- Log bookmarking and notes
4. Admin Panel (Enhanced)
- User Management: CRUD operations for users
- User Data Management: View and manage user projects and data
- Permission Matrix: Visual permission assignment
- System Configuration: Application settings
- Discovery Management: Manual script discovery triggers
- Environment Management: Conda environment detection and assignment
- Interface Monitoring: Active script proxy session management (all users)
- Metadata Audit: Track metadata changes
- Proxy Session Management: Monitor and manage proxy sessions across users
- Data Directory Overview: System-wide view of user data usage
- Backup Management: System backup configuration and monitoring
- Documentation Management: Overview of script documentation status
Engineering-Focused Features
Multi-Language Technical Documentation
- Markdown Support: Full Markdown rendering with syntax highlighting for code blocks
- Multi-Language Documentation: Support for technical documentation in multiple languages
- Mathematical Formulas: Support for LaTeX/MathJax mathematical expressions in documentation
- Technical Diagrams: Embedding of diagrams and flowcharts in Markdown
- Code Examples: Syntax-highlighted code examples in documentation
User-Centric Organization
- Personal Tagging System: Each user can tag scripts with custom labels for personal organization
- Search by Tags: Quick filtering and searching of scripts using personal tags
- Algorithm Categorization: Organize engineering algorithms by domain, complexity, or use case
- Long-Term Accessibility: Documentation designed for easy understanding after extended periods
Documentation Structure
```
app/backend/script_groups/thermal_analysis/
├── metadata.json
├── heat_transfer.py
├── thermal_stress.py
├── docs/
│   ├── heat_transfer.md        # English documentation
│   ├── heat_transfer_es.md     # Spanish documentation
│   ├── heat_transfer_it.md     # Italian documentation
│   ├── heat_transfer_fr.md     # French documentation
│   ├── thermal_stress.md
│   ├── thermal_stress_es.md
│   ├── thermal_stress_it.md
│   └── thermal_stress_fr.md
└── examples/
    ├── heat_exchanger_example.json
    └── stress_analysis_case.json
```
Backup and Data Integrity
- Daily System Backup: Automatic compression and backup of entire data directory
- Simple Backup Structure: `/backup/YYYY-MM-DD/` format for easy manual recovery
- Data Persistence: All configurations and results preserved across sessions
- Project Isolation: Each engineering project maintains independent configurations
Engineering Workflow Integration
- Parameter Persistence: Engineering calculations and configurations saved per project
- Result Documentation: Integration with script-generated reports and outputs
- Configuration Templates: Reusable parameter sets for common engineering scenarios
- Cross-Reference Documentation: Link related algorithms and reference implementations
System Backup Architecture
Backup Directory Structure
```
backup/
├── 2025-09-11/
│   └── data_backup_20250911_020000.tar.gz
├── 2025-09-10/
│   └── data_backup_20250910_020000.tar.gz
├── 2025-09-09/
│   └── data_backup_20250909_020000.tar.gz
└── backup_logs/
    ├── backup_20250911.log
    ├── backup_20250910.log
    └── backup_20250909.log
```
Backup Service Configuration
```json
{
  "backup": {
    "enabled": true,
    "schedule_time": "02:00",
    "retention_days": 30,
    "compression": "gzip",
    "backup_path": "./backup",
    "exclude_patterns": ["*.tmp", "*.log", "__pycache__"],
    "max_backup_size_gb": 10
  }
}
```
User-Centric Log Management System
Log Organization Structure
- User-Isolated Storage: Each user's logs stored separately to ensure privacy
- Project-Based Organization: Logs grouped by user projects for better context
- Date-Based Hierarchy: Daily directories for efficient log retrieval
- Unique Execution IDs: Each script execution gets a UUID for precise tracking
Log Content Structure
```json
{
  "execution_id": "abc123-def456-ghi789",
  "script_id": 42,
  "script_name": "data_analysis.py",
  "user_id": 5,
  "username": "john_doe",
  "project_id": 12,
  "project_name": "monthly_reports",
  "execution_start": "2025-09-12T10:30:00Z",
  "execution_end": "2025-09-12T10:35:30Z",
  "duration_seconds": 330,
  "status": "completed",
  "exit_code": 0,
  "parameters": {
    "input_file": "data/reports/raw_data.csv",
    "output_format": "excel",
    "date_range": "2025-09-01_to_2025-09-11"
  },
  "environment": {
    "conda_env": "data_analysis",
    "python_version": "3.11.4",
    "flask_port": 5023
  },
  "output": {
    "stdout": "Processing 1000 records...\nAnalysis complete.\nReport saved to output/monthly_report_2025-09.xlsx",
    "stderr": "",
    "debug": "Memory usage: 45MB\nExecution time breakdown: Data loading (2.3s), Processing (4.1s), Export (1.2s)"
  },
  "session_info": {
    "browser_session": "sess_xyz789",
    "interface_url": "http://127.0.0.1:5023",
    "real_time_viewers": 1
  }
}
```
Real-time Log Streaming
- WebSocket Integration: Live updates during script execution
- Multiple Viewer Support: Multiple users can monitor same execution (admin)
- Selective Streaming: Choose which output types to stream (stdout, stderr, debug)
- Bandwidth Optimization: Compress and buffer log updates for efficiency
Log Retention & Cleanup
- User Level Based Retention:
  - `viewer`: 7 days
  - `operator`: 30 days
  - `developer`: 90 days
  - `admin`: 365 days (configurable)
- Automatic Cleanup: Daily maintenance job removes expired logs
- Manual Management: Users can delete their own logs before expiration
- Storage Quotas: Per-user storage limits with notifications
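The retention policy above maps directly to a small lookup; the `admin_days` parameter reflects the "(configurable)" note:

```python
import datetime

# Retention periods from the policy above; the admin value is configurable
RETENTION_DAYS = {"viewer": 7, "operator": 30, "developer": 90, "admin": 365}

def is_log_expired(user_level, created, now=None, admin_days=365):
    """True when a log created at `created` has outlived its retention window."""
    days = admin_days if user_level == "admin" else RETENTION_DAYS[user_level]
    now = now or datetime.datetime.now()
    return (now - created).days > days
```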
Log Search & Analytics
- Full-Text Search: Search across all log content using database indexing
- Advanced Filters: Combine multiple criteria (date, script, status, duration)
- Execution Statistics: Success rates, average durations, most used scripts
- Trend Analysis: Execution patterns and performance trends over time
Log Security & Privacy
- User Isolation: Strict access control - users see only their own logs
- Admin Override: Administrators can access any user's logs for debugging
- Audit Trail: Log access events are recorded for security auditing
- Data Protection: Sensitive parameters can be masked in log storage
Script Standard and Interface Contract
Required Script Parameters
Every script must accept the following command-line parameters:
```bash
python script.py --data-dir <path> --user-level <level> --port <number> --project-id <id> --project-name <name> --theme <theme> --language <lang>
```
Parameter Details:
- `--data-dir`: Absolute path to user/project data directory
- `--user-level`: User permission level (`admin`, `developer`, `operator`, `viewer`)
- `--port`: Assigned Flask port number for the script's web interface
- `--project-id`: Current project identifier for data isolation
- `--project-name`: Human-readable project name for display in script frontend
- `--theme`: Current user theme (`light`, `dark`) for consistent UI appearance
- `--language`: Current user language (`en`, `es`, `it`, `fr`) for localized interfaces
Script Implementation Template
```python
import argparse
import os
import json
from flask import Flask, render_template, request, jsonify

def parse_arguments():
    parser = argparse.ArgumentParser(description='SIDEL ScriptsManager Script')
    parser.add_argument('--data-dir', required=True, help='Data directory path')
    parser.add_argument('--user-level', required=True, choices=['admin', 'developer', 'operator', 'viewer'])
    parser.add_argument('--port', type=int, required=True, help='Flask port number')
    parser.add_argument('--project-id', required=True, help='Project identifier')
    parser.add_argument('--project-name', required=True, help='Project display name')
    parser.add_argument('--theme', required=True, choices=['light', 'dark'], help='Current user theme')
    parser.add_argument('--language', required=True, choices=['en', 'es', 'it', 'fr'], help='Current user language')
    return parser.parse_args()

class ScriptDataManager:
    def __init__(self, data_dir, project_id, project_name):
        self.data_dir = data_dir
        self.project_id = project_id
        self.project_name = project_name
        self.ensure_data_structure()

    def ensure_data_structure(self):
        """Create data directory structure if it doesn't exist"""
        os.makedirs(self.data_dir, exist_ok=True)

    def load_config(self, filename='config.json'):
        """Load configuration from JSON file"""
        config_path = os.path.join(self.data_dir, filename)
        if os.path.exists(config_path):
            with open(config_path, 'r') as f:
                return json.load(f)
        return {}

    def save_config(self, config, filename='config.json'):
        """Save configuration to JSON file"""
        config_path = os.path.join(self.data_dir, filename)
        with open(config_path, 'w') as f:
            json.dump(config, f, indent=2)

def create_flask_app(data_manager, user_level, project_id, project_name, theme, language):
    app = Flask(__name__)

    # SIDEL logo path for consistent branding
    sidel_logo = '/static/images/SIDEL.png'

    @app.route('/')
    def index():
        config = data_manager.load_config()
        return render_template('index.html',
                               config=config,
                               user_level=user_level,
                               project_id=project_id,
                               project_name=project_name,
                               theme=theme,
                               language=language,
                               sidel_logo=sidel_logo)

    @app.route('/api/config', methods=['GET', 'POST'])
    def handle_config():
        if request.method == 'GET':
            return jsonify(data_manager.load_config())
        else:
            config = request.json
            data_manager.save_config(config)
            return jsonify({'status': 'success'})

    @app.route('/api/project-info')
    def get_project_info():
        return jsonify({
            'project_id': project_id,
            'project_name': project_name,
            'user_level': user_level,
            'theme': theme,
            'language': language,
            'sidel_logo': sidel_logo
        })

    return app

if __name__ == '__main__':
    args = parse_arguments()

    # Initialize data manager with project information
    data_manager = ScriptDataManager(args.data_dir, args.project_id, args.project_name)

    # Create Flask application with SIDEL branding, project context, theme and language
    app = create_flask_app(data_manager, args.user_level, args.project_id, args.project_name, args.theme, args.language)

    # Run Flask server
    print(f"Starting SIDEL script for project: {args.project_name} (Theme: {args.theme}, Language: {args.language})")
    app.run(host='0.0.0.0', port=args.port, debug=False)
```
Data Management Guidelines
- Use Provided Data Directory: Always use the `--data-dir` parameter for persistent storage
- JSON Configuration: Store settings in JSON files for easy management
- User Level Awareness: Adapt interface based on user permission level
- Project Isolation: Use project ID to separate data when needed
- Project Display: Use project name for user-friendly display in interfaces
- SIDEL Branding: Include SIDEL logo and corporate branding in all interfaces
- Theme Consistency: Apply the provided theme (`light`/`dark`) to maintain visual consistency
- Language Localization: Use the provided language parameter for interface localization
- Error Handling: Gracefully handle missing or corrupted data files
Flask Interface Requirements
- Port Binding: Must bind to the exact port provided by SIDEL ScriptsManager
- Docker Host Binding: Must bind to `0.0.0.0` when running in Docker containers to allow external access
- Local Development: Can use `127.0.0.1` for direct host execution, but `0.0.0.0` is recommended for consistency
is recommended for consistency - Graceful Shutdown: Handle SIGTERM for clean shutdown
- Session Management: Maintain user context throughout session
- Error Reporting: Report errors through standard logging
- SIDEL Branding: Include SIDEL logo and consistent visual identity
- Project Context: Display project name prominently in interface
- Theme Consistency: Apply the provided theme (light/dark) throughout the interface
- Language Support: Use the provided language for interface localization and messages
Docker Networking Requirements
For proper Docker deployment, SIDEL ScriptsManager uses standard networking since scripts are integrated through the proxy system:
- Standard Docker Networking: The main application container uses standard bridge networking
- Single External Port: Only port 5002 needs to be exposed for the main ScriptsManager application
- Proxy System Integration: Scripts are accessed through proxy URLs, not independent ports
- Database Connectivity: PostgreSQL remains in bridge network mode with port mapping for isolation
- Simplified Port Management: No dynamic port ranges needed since scripts integrate with main application
Example Docker Compose Configuration
```yaml
services:
  scriptsmanager:
    ports:
      - "5002:5002"  # Only main application port needed
    environment:
      - DATABASE_URL=postgresql://user:pass@postgres:5432/db
    networks:
      - scriptsmanager_network

  postgres:
    ports:
      - "5432:5432"
    networks:
      - scriptsmanager_network

networks:
  scriptsmanager_network:
    driver: bridge
```
Benefits of Proxy System
- Simplified Configuration: No need to manage script port ranges
- Single Entry Point: All access through main application port (5002)
- Better Security: Unified authentication and access control
- Easier Deployment: Standard Docker networking without host mode complexity
Multi-User Data Architecture
Data Directory Structure
```
data/
├── script_groups/
│   ├── group_analytics/
│   │   ├── user_john/
│   │   │   ├── project_default/
│   │   │   │   ├── config.json
│   │   │   │   ├── datasets.json
│   │   │   │   └── analysis_results.json
│   │   │   ├── project_monthly_report/
│   │   │   │   ├── config.json
│   │   │   │   └── report_data.json
│   │   │   └── project_customer_analysis/
│   │   │       └── config.json
│   │   └── user_mary/
│   │       ├── project_default/
│   │       │   └── config.json
│   │       └── project_experimental/
│   │           ├── config.json
│   │           └── experiments.json
│   └── group_automation/
│       ├── user_john/
│       │   └── project_default/
│       │       ├── workflows.json
│       │       └── schedules.json
│       └── user_admin/
│           └── project_system_maintenance/
│               └── maintenance_config.json
├── backups/
│   ├── user_john_project_monthly_report_20250911_143022.zip
│   └── user_mary_project_experimental_20250910_091545.zip
└── system/
    ├── proxy_sessions.json
    └── active_sessions.json
```
Project Management Workflow
- User Login: ScriptsManager creates user session
- Project Selection: User selects or creates project
- Script Execution: ScriptsManager passes project-specific data directory
- Data Persistence: Script manages its own JSON files within provided directory
- Session Continuity: Project context maintained across script executions
- Data Backup: Automatic and manual backup capabilities
Script Proxy Lifecycle Management
Proxy Integration
- Context Injection: Automatically provides WORKSPACE_PATH, PARAMETERS, ENVIRONMENT, and app variables
- Route Registration: Scripts register routes with main Flask application using proxy URL prefix
- Session Tracking: Generates unique session IDs for user proxy access monitoring
- Automatic Access: Redirects user to proxy URL upon script execution
Session Monitoring
- User Session Tracking: Monitors user access to proxy URLs and script activity
- Context Persistence: Maintains user and project context throughout proxy session
- Activity Monitoring: Tracks user interaction with script interfaces
- Timeout Management: Configurable timeout for inactive proxy sessions
Graceful Cleanup
- Session End Detection: Monitors for user navigation away from script interface
- Context Cleanup: Removes script-specific context and temporary resources
- Memory Management: Cleans up script routes and session data
- Resource Release: Frees workspace resources and closes script processes
Configuration Options
```json
{
  "proxy_system": {
    "session_timeout": 1800,
    "context_cleanup_interval": 300,
    "max_concurrent_sessions": 20,
    "max_sessions_per_user": 5,
    "session_activity_check": true,
    "session_check_retries": 3
  },
  "data_management": {
    "base_data_path": "./data",
    "auto_backup": true,
    "backup_interval_hours": 24,
    "max_backup_versions": 30,
    "compress_backups": true,
    "backup_schedule_time": "02:00"
  },
  "documentation": {
    "markdown_extensions": ["codehilite", "tables", "toc", "math"],
    "supported_languages": ["en", "es", "it", "fr"],
    "default_language": "en",
    "enable_math_rendering": true,
    "enable_diagram_rendering": false
  },
  "tagging": {
    "max_tags_per_script": 20,
    "max_tag_length": 30,
    "allowed_tag_chars": "alphanumeric_underscore_dash",
    "enable_tag_suggestions": true
  },
  "multi_user": {
    "max_projects_per_user": 50,
    "default_project_name": "project_default",
    "auto_create_default_project": true,
    "user_data_isolation": true
  },
  "security": {
    "data_directory_permissions": "755",
    "config_file_permissions": "644",
    "enable_project_sharing": false,
    "admin_can_access_all_data": true
  }
}
```
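Loading this configuration with sane fallbacks can be sketched as a per-section merge; the `DEFAULTS` shown cover only a subset of the keys above for brevity:

```python
import json

# Illustrative subset of the configuration defaults listed above
DEFAULTS = {
    "proxy_system": {"session_timeout": 1800, "max_sessions_per_user": 5},
    "documentation": {"default_language": "en"},
}

def load_config(text, defaults=DEFAULTS):
    """Shallow per-section merge: user values override defaults, missing keys kept."""
    user = json.loads(text)
    merged = {}
    for section, base in defaults.items():
        merged[section] = {**base, **user.get(section, {})}
    # carry over sections that have no defaults defined
    for section, value in user.items():
        merged.setdefault(section, value)
    return merged
```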
Script Metadata Management
Header Parsing Rules
- First Discovery: Automatically extracts metadata from script docstring/comments
- Precedence: User-edited metadata takes precedence over header-parsed data
- Re-parsing: Manual refresh option to update from modified headers
- Validation: Validates required_level values against allowed user levels
Editable Fields (Developer+ Only)
- Description: Multi-language script descriptions (short)
- Long Description: Multi-language Markdown documentation (long)
- Required Level: Minimum user level required for script execution/viewing
- Category: Script categorization for filtering and organization
- System Tags: Script tags for classification and searching
- Parameters: Script parameter definitions and validation rules
- Execution Settings: Timeout, conda environment, interface settings
Documentation Management
- Markdown Files: Automatic creation and management of documentation files
- Language Versions: Support for multiple language versions of documentation
- Template Generation: Auto-generate documentation templates for new scripts
- Content Validation: Basic validation of Markdown syntax and structure
Security Considerations
1. Authentication & Authorization
- Secure password hashing (bcrypt)
- Session management with timeout
- CSRF protection
- Input validation and sanitization
2. Script Execution Security
- Sandboxed execution environment
- Resource limits (CPU, memory, time)
- Whitelist of allowed script locations
- Parameter validation and escaping
- Web interface port isolation
- Process monitoring and automatic cleanup
3. Access Control
- Role-based access control (RBAC)
- Principle of least privilege
- Audit logging for all actions
- Secure file path handling
- Metadata editing permissions (developer+ only)
- Web interface session security
Installation & Deployment
Requirements
- Python 3.12+ (minimum required version)
- Operating System: Linux (primary) with Windows support
- Conda/Miniconda: Required for environment management
- WebSocket support: For real-time log streaming
Database Engine
PostgreSQL (Recommended for professional deployment)
- Rationale:
- Production-ready RDBMS with ACID compliance
- Better concurrent access handling for multi-user environments
- Advanced features: JSON columns, full-text search, indexing
- Horizontal scaling capabilities for future growth
- Robust backup and recovery mechanisms
- Docker containerization: Isolated database service with persistent volumes
- Development/Production parity: Same database engine across environments
- Connection pooling: Built-in support for connection management
- Migration support: Easy schema upgrades and data migrations
**Alternative: SQLite (For lightweight deployments)**
- Use case: Single-user or small team environments
- Zero-configuration: Suitable for quick development setup
- File-based storage: Simplified deployment for simple use cases
- Limitation: Limited concurrent access and scalability
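The PostgreSQL/SQLite choice can be driven by the `DATABASE_URL` scheme at startup. A minimal sketch, assuming a helper of this shape (function name and pool values are illustrative, not the application's actual configuration):

```python
# Sketch: pick connection settings based on the DATABASE_URL scheme.
import os

def database_settings() -> dict:
    """Return connection settings appropriate to the configured engine."""
    url = os.getenv("DATABASE_URL", "sqlite:///data/scriptsmanager.db")
    if url.startswith("postgresql"):
        # PostgreSQL: pooling supports concurrent multi-user access
        return {"url": url, "pool_size": 10, "max_overflow": 20,
                "pool_pre_ping": True}
    # SQLite: no server-side pool; suited to single-user deployments
    return {"url": url}
```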
### Python Dependencies

```text
# Core Framework
flask>=3.0.0
flask-sqlalchemy>=3.1.0
flask-login>=0.6.0
flask-wtf>=1.2.0
flask-socketio>=5.3.0

# Database
psycopg2-binary>=2.9.7           # PostgreSQL adapter for Python
SQLAlchemy>=2.0.16               # ORM with PostgreSQL support

# Web Server
gunicorn>=21.2.0                 # Production WSGI server
eventlet>=0.33.0                 # WebSocket support

# Conda Environment Management
conda-pack>=0.7.0                # Conda environment utilities

# Markdown Processing
markdown>=3.5.0
pymdown-extensions>=10.0         # Extended Markdown syntax
pygments>=2.16.0                 # Syntax highlighting

# File Management
watchdog>=3.0.0                  # File system monitoring

# Utilities
pyyaml>=6.0.1                    # YAML configuration support
python-dateutil>=2.8.2           # Date/time utilities
psutil>=5.9.0                    # Process management
requests>=2.31.0                 # HTTP client for health checks

# Development & Testing (optional)
pytest>=7.4.0
pytest-flask>=1.3.0
black>=23.9.0                    # Code formatting
flake8>=6.1.0                    # Code linting
```

On Python 3.12+ the standard library already provides `subprocess` and `zipfile`, so the Python 2 backport `subprocess32` and the 3.6 backport `zipfile36` are unnecessary and have been dropped from the list.

### Docker Multi-Container Architecture
The application uses a multi-container Docker setup with standard bridge networking; all script interfaces are reached through the proxy system on a single external port.
#### Container Structure

```yaml
# docker-compose.yml
services:
  # PostgreSQL Database Container (Bridge Network)
  postgres:
    image: postgres:15-alpine
    container_name: scriptsmanager_postgres
    environment:
      POSTGRES_DB: scriptsmanager
      POSTGRES_USER: scriptsmanager
      POSTGRES_PASSWORD: scriptsmanager_dev_password
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./sql:/docker-entrypoint-initdb.d
    ports:
      - "5432:5432"              # Port mapping for database access
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U scriptsmanager"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - scriptsmanager_network

  # Application Container (Production) - Bridge Network
  scriptsmanager:
    build: .
    environment:
      - DATABASE_URL=postgresql://scriptsmanager:scriptsmanager_dev_password@postgres:5432/scriptsmanager
      - DEBUG=false
    ports:
      - "5002:5002"              # Main application port
    depends_on:
      postgres:
        condition: service_healthy
    volumes:
      - ./data:/app/data
      - ./backup:/app/backup
      - ./logs:/app/logs
      - ./app/backend/script_groups:/app/app/backend/script_groups
    networks:
      - scriptsmanager_network

  # Application Container (Development) - Bridge Network
  scriptsmanager-dev:
    build: .
    environment:
      - DATABASE_URL=postgresql://scriptsmanager:scriptsmanager_dev_password@postgres:5432/scriptsmanager
      - DEBUG=true
    ports:
      - "5003:5002"              # Exposed on 5003 to avoid clashing with production
    depends_on:
      postgres:
        condition: service_healthy
    volumes:
      - .:/app                   # Hot reload - entire codebase mounted
      - ./backup:/app/backup
      - ./logs:/app/logs
    networks:
      - scriptsmanager_network

networks:
  scriptsmanager_network:
    driver: bridge

volumes:
  postgres_data:
    driver: local
```
### Benefits of Proxy System Architecture
- Simplified Script Integration: Scripts integrate directly with main Flask application
- Single Port Management: Only one external port (5002) needed for all access
- Better Security: Unified authentication and access control through proxy system
- Standard Docker Networking: Uses bridge networking for better container isolation
- Easier Deployment: No need for host networking or dynamic port ranges
- Database Isolation: PostgreSQL remains isolated in bridge network for security
- Unified Session Management: All script access through centralized proxy system
- Simplified Configuration: Standard Docker networking without host mode complexity
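As a concrete illustration of the single-port scheme above, a script interface URL can be composed from the proxy routing pattern described earlier in this spec (the helper name is an assumption):

```python
# Sketch: compose a proxy URL following the pattern
# http://localhost:5002/project/{project_id}/script/{script_id}/user/{user_id}/
def proxy_url(project_id: str, script_id: str, user_id: str,
              base: str = "http://localhost:5002") -> str:
    """Return the proxy URL through which a script interface is reached."""
    return f"{base}/project/{project_id}/script/{script_id}/user/{user_id}/"
```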
### Manual Installation

```bash
# Create Python 3.12+ virtual environment
python3.12 -m venv scriptsmanager_env
source scriptsmanager_env/bin/activate   # Linux
# scriptsmanager_env\Scripts\activate    # Windows

# Install dependencies
pip install --upgrade pip
pip install -r requirements.txt

# Initialize database
python init_db.py

# Create initial admin user
python create_admin.py --username admin --password <secure_password>

# Start application
python app.py
```
### Database Setup

```python
# Database initialization with SQLite
import os
import sqlite3

# Werkzeug ships with Flask; a bcrypt-backed hash could be substituted
# to satisfy the security requirements above.
from werkzeug.security import generate_password_hash as hash_password

def initialize_database(db_path="data/scriptsmanager.db"):
    """Initialize SQLite database with required tables"""
    # Ensure data directory exists
    os.makedirs(os.path.dirname(db_path), exist_ok=True)

    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # Create all tables from schema
    with open('sql/create_tables.sql', 'r') as f:
        schema_sql = f.read()
    cursor.executescript(schema_sql)

    # Create default admin user
    admin_password = hash_password("admin123")  # Change in production
    cursor.execute("""
        INSERT OR IGNORE INTO users
        (username, email, password_hash, user_level, is_active)
        VALUES (?, ?, ?, ?, ?)
    """, ("admin", "admin@localhost", admin_password, "admin", True))

    conn.commit()
    conn.close()
    print(f"Database initialized at: {db_path}")

if __name__ == "__main__":
    initialize_database()
```
### Configuration Management

```python
# config/app_config.py
import os
from pathlib import Path

class Config:
    # Database Configuration
    DATABASE_URL = os.getenv('DATABASE_URL', 'sqlite:///data/scriptsmanager.db')

    # Application Settings
    SECRET_KEY = os.getenv('SECRET_KEY', 'your-secret-key-change-in-production')
    DEBUG = os.getenv('DEBUG', 'False').lower() == 'true'

    # Multi-user Settings
    BASE_DATA_PATH = Path(os.getenv('BASE_DATA_PATH', './data'))
    MAX_PROJECTS_PER_USER = int(os.getenv('MAX_PROJECTS_PER_USER', '50'))

    # Proxy System Settings
    PROXY_SESSION_TIMEOUT = int(os.getenv('PROXY_SESSION_TIMEOUT', '1800'))
    MAX_CONCURRENT_SESSIONS = int(os.getenv('MAX_CONCURRENT_SESSIONS', '20'))
    MAX_SESSIONS_PER_USER = int(os.getenv('MAX_SESSIONS_PER_USER', '5'))
    CONTEXT_CLEANUP_INTERVAL = int(os.getenv('CONTEXT_CLEANUP_INTERVAL', '300'))

    # Backup Configuration
    BACKUP_ENABLED = os.getenv('BACKUP_ENABLED', 'True').lower() == 'true'
    BACKUP_SCHEDULE_TIME = os.getenv('BACKUP_SCHEDULE_TIME', '02:00')
    BACKUP_RETENTION_DAYS = int(os.getenv('BACKUP_RETENTION_DAYS', '30'))

    # Conda Environment
    CONDA_AUTO_DETECT = os.getenv('CONDA_AUTO_DETECT', 'True').lower() == 'true'

    # Supported Languages
    SUPPORTED_LANGUAGES = ['en', 'es', 'it', 'fr']
    DEFAULT_LANGUAGE = os.getenv('DEFAULT_LANGUAGE', 'en')
```
### Conda Environment Detection

```python
# services/conda_service.py
import json
import os
import subprocess

class CondaService:
    def __init__(self):
        self.conda_executable = self.find_conda_executable()

    def find_conda_executable(self):
        """Find conda executable on Windows/Linux"""
        possible_paths = [
            'conda',
            '/opt/conda/bin/conda',
            '/usr/local/bin/conda',
            os.path.expanduser('~/miniconda3/bin/conda'),
            os.path.expanduser('~/anaconda3/bin/conda'),
            # Windows paths
            r'C:\ProgramData\Miniconda3\Scripts\conda.exe',
            r'C:\ProgramData\Anaconda3\Scripts\conda.exe',
            os.path.expanduser(r'~\Miniconda3\Scripts\conda.exe'),
            os.path.expanduser(r'~\Anaconda3\Scripts\conda.exe'),
        ]
        for path in possible_paths:
            try:
                result = subprocess.run([path, '--version'],
                                        capture_output=True, text=True, timeout=10)
                if result.returncode == 0:
                    return path
            except (FileNotFoundError, subprocess.TimeoutExpired):
                continue
        raise RuntimeError("Conda executable not found. Please install Miniconda or Anaconda.")

    def list_environments(self):
        """List all available conda environments"""
        try:
            result = subprocess.run([self.conda_executable, 'env', 'list', '--json'],
                                    capture_output=True, text=True, timeout=30)
            if result.returncode == 0:
                env_data = json.loads(result.stdout)
                return env_data.get('envs', [])
        except Exception as e:
            print(f"Error listing conda environments: {e}")
        return []  # Fall back to an empty list on any failure
```
### Production Deployment

```bash
# Production deployment with Gunicorn
gunicorn --bind 0.0.0.0:8000 \
         --workers 4 \
         --worker-class eventlet \
         --timeout 300 \
         --keep-alive 30 \
         --access-logfile logs/access.log \
         --error-logfile logs/error.log \
         app:app
```

```ini
# Systemd service file (Linux)
# /etc/systemd/system/scriptsmanager.service
[Unit]
Description=ScriptsManager Web Application
After=network.target

[Service]
Type=simple
User=scriptsmanager
Group=scriptsmanager
WorkingDirectory=/opt/scriptsmanager
Environment=PATH=/opt/scriptsmanager/venv/bin
ExecStart=/opt/scriptsmanager/venv/bin/gunicorn --config gunicorn.conf.py app:app
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```
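The systemd unit references a `gunicorn.conf.py`. A hedged sketch of that file, mirroring the command-line flags shown above (the values themselves are deployment assumptions):

```python
# gunicorn.conf.py - sketch mirroring the CLI flags used above
bind = "0.0.0.0:8000"
workers = 4
worker_class = "eventlet"          # required for WebSocket log streaming
timeout = 300
keepalive = 30
accesslog = "logs/access.log"
errorlog = "logs/error.log"
```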
## Development Environment
### Docker Development Stack
The application provides a complete Docker-based development environment with hot-reload capabilities:
```bash
# Start development environment
./docker-manage.sh start-dev

# Stop development environment
./docker-manage.sh stop-dev

# Check logs
./docker-manage.sh logs-dev

# Rebuild development image
./docker-manage.sh build-dev
```
### Development Features
- Hot Reload: Code changes automatically reflected without rebuilds
- Database Persistence: PostgreSQL data survives container restarts
- Debug Support: VS Code debugging through remote containers
- Port Forwarding: Application accessible at localhost:5003
- Conda Environments:
  - `scriptsmanager`: Main Flask application
  - `tsnet`: Scientific computing and analysis tools
- Volume Mounts:
- Source code: Live editing with hot reload
- Data directory: Persistent script storage
- Logs: Development debugging
- Backup: Development backup testing
### Local Development Setup (Alternative)
For developers preferring local execution:
1. Install conda and create environments:
   ```bash
   conda env create -f conda-environments.yml
   conda activate scriptsmanager
   ```
2. Setup PostgreSQL locally:
   ```bash
   # Install PostgreSQL
   sudo apt install postgresql postgresql-contrib
   # Create database and user
   sudo -u postgres psql
   ```
   ```sql
   CREATE DATABASE scriptsmanager;
   CREATE USER scriptsmanager WITH PASSWORD 'dev_password';
   GRANT ALL PRIVILEGES ON DATABASE scriptsmanager TO scriptsmanager;
   ```
3. Configure environment:
   ```bash
   export DATABASE_URL="postgresql://scriptsmanager:dev_password@localhost:5432/scriptsmanager"
   export DEBUG=true
   ```
4. Run application:
   ```bash
   python scripts/run_app.py
   ```
### VS Code Integration
The project includes VS Code workspace configuration for:
- Remote container development
- Python debugging with breakpoints
- Integrated terminal with conda environments
- Docker container management
- PostgreSQL database browser extensions
### Cross-Platform Considerations
- Path Handling: Use `pathlib.Path` for cross-platform file operations
- Process Management: Platform-specific conda activation commands
- Service Installation: Different approaches for Linux (systemd) vs Windows (Windows Service)
- File Permissions: Appropriate permission handling for each OS
- Environment Variables: Platform-specific environment variable handling
## Monitoring & Health Checks
```python
# Health check endpoint
from datetime import datetime, timezone

@app.route('/health')
def health_check():
    return {
        'status': 'healthy',
        # timezone-aware timestamp (datetime.utcnow() is deprecated in 3.12)
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'database': check_database_connection(),
        'conda': check_conda_availability(),
        'active_scripts': get_active_script_count(),
        'proxy_sessions': get_proxy_session_stats()
    }
```