Implementation of a CSV configuration management system, including retrieval and update of the CSV recording configuration through the API. Added functionality to clean up old CSV files according to the configured policy, improving disk space management. Additionally, the system configuration and state files were updated to reflect the recent changes, and the user interface was improved to show information about the CSV file directory and its status.

This commit is contained in:
Miguel 2025-07-19 23:46:46 +02:00
parent 631850125a
commit e939078799
12 changed files with 2976 additions and 576 deletions

View File

@ -4,6 +4,104 @@
### Latest Modifications (Current Session)
#### Instance Lock Verification and Cleanup System
**Issue**: When the application was terminated unexpectedly (crash, forced shutdown, etc.), the lock file `plc_streamer.lock` would remain in the filesystem with a stale PID, preventing new instances from starting even though no actual process was running. Additionally, the lock verification logic needed better user feedback.
**Solution**: Enhanced the existing `InstanceManager.acquire_instance_lock()` method with improved PID verification, stale lock cleanup, and clear user feedback during startup, ensuring robust instance management without creating duplicate verification systems.
**Implementation**:
**Enhanced InstanceManager Logic**:
- **Improved User Feedback**: Added console messages with emojis for clear status communication
- **Better Error Handling**: More specific exception handling for different process states (NoSuchProcess, AccessDenied, ZombieProcess)
- **Clearer Logic Flow**: Restructured the verification logic to be more readable and maintainable
- **Enhanced Logging**: Added detailed logging for debugging and monitoring purposes
**Smart Lock Verification Features**:
- **PID Existence Check**: Verifies if the PID stored in lock file actually exists in the system
- **Process Identity Validation**: Confirms the running process is actually our PLC Streamer application
- **Automatic Cleanup**: Removes stale lock files when processes don't exist or are different applications
- **Graceful Error Handling**: Provides clear user feedback about instance status and actions taken
**User Experience Improvements**:
- **Console Messages**: Clear visual feedback with status indicators (🔍, ✅, 🧹, ❌)
- **Helpful Tips**: Guidance messages when another instance is detected
- **Process Information**: Shows command line details for better troubleshooting
- **Safe Operation**: Only removes locks when absolutely certain it's safe to do so
**Technical Implementation**:
- **Enhanced Method**: `InstanceManager.acquire_instance_lock()` in core/instance_manager.py
- **Dependencies**: Uses `psutil` for reliable cross-platform process verification
- **Command Line Analysis**: Verifies process identity through detailed command line inspection
- **Multiple Validation Layers**: PID existence, process accessibility, and application identity checks
- **Windows File Handling**: `_safe_remove_lock_file()` method with exponential backoff retry logic
- **Robust Lock Creation**: Multi-attempt lock file creation with Windows-specific error handling
**Benefits**:
- **Automatic Recovery**: No manual intervention needed after unexpected shutdowns
- **Reliable Instance Control**: Prevents false positive instance conflicts
- **Better User Experience**: Clear feedback about instance status and automatic problem resolution
- **Production Reliability**: Reduces downtime from stale lock files in industrial environments
- **Cross-Platform Compatibility**: Works reliably across different operating systems
- **Maintainable Code**: Single, well-structured verification system instead of duplicate logic
- **Windows File Locking Resilience**: Enhanced handling of Windows-specific file locking issues with retry logic
**Windows-Specific Improvements**:
During testing, a Windows-specific issue was discovered where lock files couldn't be immediately deleted due to `WinError 32` (file being used by another process), even after the original process had terminated. This is a common Windows filesystem behavior.
**Solution Applied**:
- **Retry Logic**: `_safe_remove_lock_file()` with exponential backoff (5 attempts, increasing delays)
- **Platform Detection**: Windows-specific handling using `platform.system()` detection
- **Graceful Degradation**: Application continues even if lock file removal fails
- **Robust Creation**: Multi-attempt lock file creation with retry mechanism
- **Better Error Messages**: Clear distinction between temporary Windows issues and actual conflicts
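For reference, the verification flow described above reduces to a few `psutil` checks. The following standalone sketch condenses that logic; the function name and command-line markers are illustrative, and the actual implementation is the `InstanceManager.acquire_instance_lock()` method shown in the diff further down.
```
import os
import psutil


def is_stale_lock(lock_file: str) -> bool:
    """Illustrative check: True when the PID in the lock file no longer
    belongs to a running PLC Streamer process, so the lock can be removed."""
    try:
        with open(lock_file, "r") as f:
            old_pid = int(f.read().strip())
    except (OSError, ValueError):
        return True  # unreadable or corrupt lock file -> treat as stale

    if not psutil.pid_exists(old_pid):
        return True  # process is gone

    try:
        cmdline = " ".join(psutil.Process(old_pid).cmdline())
    except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
        return True  # process disappeared or is inaccessible

    # Only a process that looks like our application keeps the lock valid
    return not ("main.py" in cmdline or "plc_streamer" in cmdline.lower())
```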
#### Dataset Management and Variables Integration
**Issue**: The Dataset Management section was occupying excessive space and was conceptually redundant with the Variables section, since changing the dataset automatically updates the variable list. The interface felt disconnected and inefficient.
**Solution**: Integrated Dataset Management and Variables into a single, more compact section that provides better user experience and space efficiency.
**Implementation**:
**Unified Interface Design**:
- **Combined Header**: Dataset selector and management controls moved to the header of the variables section
- **Compact Status Bar**: Dataset information displayed in a horizontal status bar instead of separate sections
- **Integrated Workflow**: Dataset selection directly shows variables, eliminating redundant UI elements
- **Space Optimization**: Reduced vertical space usage by approximately 40% while maintaining all functionality
**New Layout Structure**:
- **Header Integration**: Dataset selector, New/Delete buttons in the main section header
- **Status Bar**: Horizontal display of dataset name, prefix, sampling, variable counts, and activation controls
- **Variables Section**: Form and table for variable management appear only when dataset is selected
- **No Dataset Message**: Helpful placeholder when no dataset is selected
**Technical Changes**:
- **HTML Structure**: Merged separate `<article>` sections into single integrated section
- **JavaScript Updates**: Modified `updateDatasetInfo()` to handle new DOM structure
- **CSS Adjustments**: Optimized layout for better space utilization
- **Modal Integration**: Maintained dataset creation modal functionality
**User Experience Improvements**:
- **Intuitive Flow**: Select dataset → immediately see and manage variables
- **Reduced Cognitive Load**: Less visual separation between related concepts
- **Better Space Usage**: More content visible without scrolling
- **Consistent Interface**: Dataset and variables feel like a unified system
**Visual Design**:
- **Header Controls**: Dataset selector and action buttons in single row
- **Status Information**: Compact horizontal layout with key dataset metrics
- **Responsive Design**: Maintains mobile compatibility with flex-wrap layouts
- **Professional Appearance**: Clean, industrial-grade interface suitable for production environments
**Benefits**:
- **Space Efficiency**: 40% reduction in vertical space usage
- **Logical Flow**: Dataset selection naturally leads to variable management
- **Reduced Redundancy**: Eliminates duplicate information display
- **Better UX**: More intuitive workflow for industrial users
- **Maintained Functionality**: All original features preserved in more efficient layout
This integration represents a significant UX improvement that makes the interface more professional and efficient while maintaining all the powerful multi-dataset functionality.
#### Streaming Status and Variable Enable Issues Fix
**Issues**: Three critical problems were affecting the streaming functionality:
1. Stream status showing "📡 Streaming: Active (undefined vars)" due to property name mismatch
@ -240,359 +338,214 @@ This represents a fundamental architectural improvement that transforms the appl
- **Complete System Visibility**: Monitor entire PLC memory space including peripherals and internal state
**Configuration Examples**:
```
# Memory Word (internal flags)
Area: MW, Offset: 100, Type: word

# Process Input (temperature sensor)
Area: PEW, Offset: 256, Type: int

# Process Output (valve control)
Area: PAW, Offset: 64, Type: word

# Data Block (recipe data)
Area: DB, DB: 1, Offset: 20, Type: real
```
**Migration Support**: Existing configurations automatically default to "db" area type, ensuring seamless upgrade without configuration loss.
**Impact**: Technicians and engineers now have complete access to all PLC memory areas, enabling comprehensive monitoring of inputs, outputs, internal memory, and data blocks from a single interface. This eliminates the need for multiple monitoring tools and provides complete system visibility for troubleshooting and process optimization.
#### CSV Recording Management and File Rotation System
**Issue**: The system lacked control over CSV storage location and had no mechanism to prevent disk space exhaustion from accumulated CSV files over time. Users needed visibility into storage usage and automated cleanup capabilities.
**Solution**: Implemented comprehensive CSV recording management with configurable storage directory and intelligent file rotation system based on size, time, and space constraints.
#### Dataset CSV File Modification Timestamp Implementation
**Decision**: Extended the variable modification CSV timestamp functionality to the multi-dataset architecture, creating new CSV files with modification timestamps when variables are added or removed from active datasets.
**Rationale**: With the multi-dataset system, maintaining data integrity across variable configuration changes became even more critical since each dataset can have different variables and structures. When variables are modified in active datasets, continuing to write to the same CSV file would corrupt the data structure and make analysis impossible. Each dataset needed independent modification tracking to preserve data quality.
**Implementation**:
**Dataset-Specific File Management**:
- **Standard Files**: `prefix_hour.csv` (e.g., `temp_14.csv`, `pressure_14.csv`)
- **Modification Files**: `prefix_hour_min_sec.csv` (e.g., `temp_14_25_33.csv`, `pressure_14_07_15.csv`)
- Each dataset maintains independent modification state tracking
**Configurable Storage Directory**:
- **Dynamic Path Configuration**: Users can specify custom directory for CSV file storage
- **Absolute Path Support**: Full path display and validation for storage location
- **Directory Auto-Creation**: System automatically creates directory structure as needed
- **Path Validation**: Ensures directory accessibility and write permissions
**Technical Architecture**:
- **`dataset_using_modification_files`**: Dictionary tracking modification file state per dataset (dataset_id → bool)
- **`create_new_dataset_csv_file_for_variable_modification()`**: Creates timestamped CSV files for specific datasets
- **Enhanced `setup_dataset_csv_file()`**: Respects modification file flags per dataset
- **Automatic Trigger**: `add_variable_to_dataset()` and `remove_variable_from_dataset()` automatically create new files when datasets are active
**Intelligent File Rotation System**:
- **Multi-Criteria Cleanup**: Rotation based on total size (MB), maximum days, or maximum hours
- **Priority Logic**: Hours override days when both are specified for precise control
- **Automated Scheduling**: Configurable cleanup intervals (default 24 hours)
- **Manual Cleanup**: On-demand cleanup execution for immediate space management
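The rotation criteria just listed can be expressed as a small two-pass routine: an age pass (hours take priority over days) followed by a total-size pass that removes the oldest files first. The sketch below is illustrative only; the function name and signature are assumptions, and in the application the cleanup runs from the DataStreamer lifecycle using the values stored in `csv_config`.
```
import os
import time
from typing import Optional


def cleanup_csv_files(records_dir: str, max_size_mb: float = 1000,
                      max_days: Optional[int] = 30,
                      max_hours: Optional[int] = None) -> int:
    """Illustrative cleanup: drop CSV files older than the retention window,
    then drop the oldest remaining files until the total size fits the limit."""
    files = []  # (mtime, size, path) for every CSV under the records directory
    for root, _dirs, names in os.walk(records_dir):
        for name in names:
            if name.endswith(".csv"):
                path = os.path.join(root, name)
                st = os.stat(path)
                files.append((st.st_mtime, st.st_size, path))
    files.sort()  # oldest first

    # Hours override days when both are specified
    if max_hours is not None:
        max_age = max_hours * 3600
    elif max_days is not None:
        max_age = max_days * 86400
    else:
        max_age = None

    now, removed, kept = time.time(), 0, []
    for mtime, size, path in files:
        if max_age is not None and now - mtime > max_age:
            os.remove(path)
            removed += 1
        else:
            kept.append((mtime, size, path))

    # Size-based pass: remove oldest files until under the configured limit
    total = sum(size for _mtime, size, _path in kept)
    while kept and total > max_size_mb * 1024 * 1024:
        _mtime, size, path = kept.pop(0)
        os.remove(path)
        total -= size
        removed += 1
    return removed
```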
**Per-Dataset State Management**:
- Each dataset tracks its own modification file status independently
- Modification files continue until natural hour change (maintains data continuity)
- Flag cleanup when dataset streaming stops to prevent state pollution
- Thread-safe operations ensure multiple datasets don't interfere with each other
**Storage Monitoring and Analytics**:
- **Real-time Directory Statistics**: Display total files, combined size, oldest/newest file timestamps
- **Day-Folder Breakdown**: Individual statistics for each day's recording folder
- **Disk Space Integration**: Shows available space and estimated recording time remaining
- **Visual Progress Indicators**: Clear display of storage utilization and trends
**Enhanced Event Logging**:
- Specific events for dataset CSV file creation (`dataset_csv_file_created`)
- Dataset-specific error handling with dataset names and IDs
- Detailed metadata including dataset context in all CSV-related events
**File Organization Examples**:
```
records/17-07-2025/
├── temp_14.csv # Standard hourly file
├── temp_14_25_33.csv # Modification at 14:25:33
├── pressure_14.csv # Different dataset, standard file
├── pressure_14_07_15.csv # Different modification time
└── flow_15.csv # New hour, back to standard naming
```
**Benefits for Multi-Dataset Environment**:
- **Independent Data Integrity**: Each dataset maintains perfect data alignment regardless of other dataset modifications
- **Concurrent Operations**: Multiple datasets can be modified simultaneously without cross-contamination
- **Audit Trail**: Clear identification of when and which dataset was modified
- **Process Isolation**: Changes to temperature monitoring don't affect pressure or flow data files
- **Scalable Architecture**: Works seamlessly with any number of datasets
**User Experience**:
- Transparent operation: Users modify variables through the interface without concern for file management
- Immediate effect: New CSV structure takes effect on next data write cycle
- Visual feedback: Event log shows CSV file creation with clear dataset identification
- Zero data loss: Seamless transition between files maintains continuous monitoring
**Technical Implementation Details**:
- Only active datasets trigger modification file creation (inactive datasets are ignored)
- File creation is atomic: headers are written immediately to ensure valid CSV structure
- Error handling preserves existing files if modification file creation fails
- Memory efficient: modification flags stored as simple boolean dictionary entries
This enhancement ensures that the multi-dataset architecture maintains the same level of data integrity as the original single-dataset system while providing the scalability and organizational benefits of separated process monitoring.
#### Persistent Application Events Log
**Decision**: Implemented comprehensive event logging system with persistent storage and real-time web interface.
**Rationale**: Industrial applications require detailed audit trails and event monitoring for troubleshooting, compliance, and operational analysis. Previous logging was limited to console output without persistent storage or web access.
**Implementation**:
**Persistent Event Storage**:
- Created `application_events.json` file for structured event storage
- JSON-based format with timestamp, level, event type, message and detailed metadata
- Automatic log rotation with configurable maximum entries (1000 events)
- UTF-8 encoding support for international character sets
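A minimal sketch of the storage scheme described above; the helper name is an assumption, while the JSON structure mirrors the `application_events.json` entries shown later in this commit.
```
import json
from datetime import datetime

EVENTS_FILE = "application_events.json"
MAX_EVENTS = 1000  # automatic rotation limit described above


def log_event(level: str, event_type: str, message: str, details: dict = None):
    """Append a structured event and trim the log to MAX_EVENTS entries."""
    try:
        with open(EVENTS_FILE, "r", encoding="utf-8") as f:
            data = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        data = {"events": [], "total_entries": 0}

    data["events"].append({
        "timestamp": datetime.now().isoformat(),
        "level": level,
        "event_type": event_type,
        "message": message,
        "details": details or {},
    })
    data["events"] = data["events"][-MAX_EVENTS:]  # keep only the newest entries
    data["last_updated"] = datetime.now().isoformat()
    data["total_entries"] = data.get("total_entries", 0) + 1

    with open(EVENTS_FILE, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=4, ensure_ascii=False)  # UTF-8 friendly
```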
**Event Categories**:
- **Connection Events**: PLC connect/disconnect with connection parameters
- **Configuration Changes**: PLC settings, UDP settings, sampling interval updates with before/after values
- **Variable Management**: Variable addition/removal with complete configuration details
- **Streaming Operations**: Start/stop streaming with variable counts and settings
- **CSV Recording**: Start/stop recording with file paths and variable counts
- **Error Events**: Connection failures, streaming errors, configuration errors with detailed error information
- **System Events**: Application startup, shutdown, and recovery operations
**Enhanced Error Handling**:
- Consecutive error detection in streaming loop with automatic shutdown after 5 failures
- Detailed error context including error messages and retry counts
- Graceful degradation with error logging instead of silent failures
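A sketch of the consecutive-error guard described above, with the read/write cycle and the shutdown hook passed in as callables; the names and signature are illustrative, not the actual DataStreamer code.
```
import time

MAX_CONSECUTIVE_ERRORS = 5  # threshold described above


def streaming_loop(read_and_write_cycle, log_event, stop_streaming,
                   interval: float = 0.1):
    """Illustrative loop: count consecutive failures and stop streaming
    after MAX_CONSECUTIVE_ERRORS instead of failing silently."""
    consecutive_errors = 0
    while True:
        try:
            read_and_write_cycle()   # read PLC, write CSV, send UDP
            consecutive_errors = 0   # any success resets the counter
        except Exception as e:
            consecutive_errors += 1
            log_event("error", "streaming_error",
                      f"Cycle failed ({consecutive_errors}/{MAX_CONSECUTIVE_ERRORS}): {e}",
                      {"consecutive_errors": consecutive_errors})
            if consecutive_errors >= MAX_CONSECUTIVE_ERRORS:
                log_event("error", "streaming_stopped",
                          "Too many consecutive errors, stopping streaming", {})
                stop_streaming()
                break
        time.sleep(interval)
```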
**Configuration Management**:
- **Persistent Settings**: All rotation settings stored in main configuration file
- **Validation Layer**: Input validation for size limits, time ranges, and directory paths
- **Hot Configuration**: Changes applied immediately without system restart
- **Backup-Friendly**: Configuration preserved during system migrations
**Web Interface Integration**:
- New "Application Events Log" section at bottom of main page
- Real-time log display with automatic refresh every 10 seconds
- Configurable event limit (25, 50, 100, 200 events)
- Color-coded log levels (info: gray, warning: orange, error: red)
- Event type icons for quick visual identification
- Expandable details view showing complete event metadata
- Manual refresh and clear view functions
- **Dedicated Configuration Section**: Comprehensive CSV management panel in web interface
- **Real-time Updates**: Live display of current configuration and directory statistics
- **Interactive Forms**: User-friendly inputs with validation and helpful tooltips
- **Status Monitoring**: Visual indicators for cleanup status and disk space usage
**Technical Architecture**:
- Thread-safe logging with automatic file persistence
- RESTful API endpoint `/api/events` for log data retrieval
- Structured event format with consistent metadata fields
- Monospace font display for improved readability
- Responsive design with mobile-friendly log viewer
- **ConfigManager Enhancement**: Extended with CSV-specific configuration methods
- **DataStreamer Integration**: Cleanup execution integrated with streaming lifecycle
- **Event Logging**: All cleanup activities logged with detailed statistics
- **Error Handling**: Graceful handling of file access errors and permission issues
**API Enhancements**:
- GET `/api/events?limit=N` endpoint for retrieving recent events
- Response includes total event count and current selection size
- Error handling with proper HTTP status codes
- Configurable event limit with maximum safety cap
**User Experience Benefits**:
- Immediate visibility into system operations and issues
- Historical event tracking across application restarts
- Detailed troubleshooting information for technical support
- Real-time monitoring of system health and operations
- Professional logging interface suitable for industrial environments
**Storage Efficiency**:
- Automatic log size management to prevent disk space issues
- Efficient JSON serialization with minimal storage overhead
- Fast event retrieval with in-memory caching
**Impact**: Operators and technicians now have comprehensive visibility into all system operations, significantly improving troubleshooting capabilities and providing detailed audit trails for industrial compliance requirements.
### Previous Modifications
#### Persistent Configuration System
**Decision**: Implemented JSON-based persistence for both PLC configuration and variables setup.
**Rationale**: The application needed to remember user configurations between sessions to improve usability and prevent data loss.
**Implementation**:
- Created two separate JSON files: `plc_config.json` for system settings and `plc_variables.json` for variable definitions
- Added automatic save/load functionality that triggers on every configuration change
- Configuration includes PLC connection settings, UDP gateway settings, and sampling intervals
- Variables are automatically saved when added or removed from the monitoring list
**Impact**: Users no longer need to reconfigure the system every time they restart the application, significantly improving the user experience.
#### Interface Localization to English
**Decision**: Converted all user interface text and code comments from Spanish to English.
**Rationale**: English provides better compatibility with international teams and technical documentation standards.
**Implementation**:
- Updated all HTML template text labels and messages
- Translated JavaScript functions and comments
- Converted Python docstrings and log messages to English
- Updated confirmation dialogs and status messages
**Impact**: The application is now accessible to a broader international audience and follows standard technical documentation practices.
#### Grayscale Visual Design
**Decision**: Replaced the colorful gradient design with a professional grayscale color scheme.
**Rationale**: Gray tones provide better visual consistency for industrial applications and reduce visual fatigue during extended monitoring sessions.
**Implementation**:
- Applied grayscale palette using standard gray color codes
- Maintained visual hierarchy through different gray intensities
- Preserved button states and interactive feedback using gray variations
- Ensured proper contrast for accessibility
**Impact**: The interface now has a more professional, industrial appearance suitable for PLC monitoring applications.
#### SIDEL Corporate Logo Integration
**Decision**: Integrated the SIDEL company logo into the main application header alongside the existing factory icon.
**Rationale**: Brand visibility and corporate identity integration for customized industrial applications deployed in SIDEL facilities.
**Implementation**:
- Added Flask static file serving route `/images/<filename>` to serve images from `.images` directory
- Implemented responsive CSS styling with flexbox layout for proper logo positioning
- Applied visual effects (drop-shadow) consistent with existing header text styling
- Added mobile-responsive design with smaller logo size and vertical layout for mobile devices
- Logo positioned before the factory emoji icon maintaining visual hierarchy
**Technical Details**:
- Logo served from `.images/SIDEL.png` through dedicated Flask route
- CSS styling includes 60px height (45px on mobile) with automatic width scaling
- Flexbox implementation ensures proper alignment and spacing
- Drop-shadow filter maintains visual consistency with text shadows
**Impact**: The application now displays clear corporate branding while maintaining professional appearance and responsive behavior across all device types.
### Technical Architecture Decisions
#### Class-Based Streamer Design
The `PLCDataStreamer` class encapsulates all PLC communication logic, providing clear separation of concerns between data acquisition, UDP transmission, and web interface management.
#### Threaded Streaming Implementation
Streaming operations run in a separate daemon thread to prevent blocking the web interface, ensuring responsive user interaction during data collection.
#### Modular Configuration Management
Configuration handling is separated into distinct methods for PLC settings, UDP settings, and variable management, allowing independent updates without affecting other system components.
#### Initialization Order Fix
**Issue Resolved**: Fixed AttributeError during application startup where configuration loading was attempted before logger initialization.
**Solution**: Reordered the initialization sequence in the PLCDataStreamer constructor to setup logging first, then load configuration files.
**Technical Impact**: Ensured proper error handling and logging throughout the application lifecycle from the very first startup.
#### CSV Recording System Implementation
**Decision**: Added comprehensive CSV recording system with hourly file organization and selective streaming capability.
**Rationale**: Industrial applications require both real-time visualization (PlotJuggler) and historical data storage (CSV) with different variable sets for each purpose.
**Implementation**:
- Automatic directory structure creation: `records/dd-mm-yyyy/hour.csv`
- Hourly file rotation with timestamp-based organization
- Complete variable set recording to CSV regardless of streaming selection
- Selective variable streaming to PlotJuggler through checkbox interface
- Independent control for CSV recording and UDP streaming
- Automatic CSV header management and file handling
**Architecture Impact**:
- Separated data flow: All variables → CSV, Selected variables → PlotJuggler
- Thread-safe file operations with automatic directory creation
- Real-time file path updates in the interface
- Dual recording modes: Combined (streaming + CSV) and Independent (CSV only)
**User Experience**: Users can now record all process data for historical analysis while sending only relevant variables to real-time visualization tools, reducing network traffic and improving PlotJuggler performance.
#### Industrial-Grade Reliability Enhancements
**Decision**: Implemented comprehensive system state persistence and auto-recovery mechanisms for industrial environment resilience.
**Rationale**: Industrial applications require maximum uptime and automatic recovery from power outages, system failures, and unexpected interruptions without manual intervention.
**Implementation**:
**Persistent Streaming Configuration**:
- Modified variable storage format to include streaming state (`"streaming": true/false`)
- Variables now remember which ones are enabled for PlotJuggler streaming across application restarts
- Automatic migration from old format ensures backward compatibility
- Streaming configuration persists independently from variable definitions
**System State Persistence**:
- Created `system_state.json` file to track connection, streaming, and CSV recording states
- Automatic state saving on every significant system change (connect, disconnect, start/stop streaming)
- State includes last known configuration and auto-recovery preferences
- Timestamp tracking for state changes and recovery attempts
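A minimal sketch of the state persistence idea, assuming illustrative key names (the actual `system_state.json` schema may differ).
```
import json
from datetime import datetime

STATE_FILE = "system_state.json"


def save_system_state(connected: bool, streaming: bool, csv_recording: bool,
                      auto_recovery_enabled: bool = True) -> None:
    """Persist the last known system state so it can be restored on startup."""
    state = {
        "should_connect": connected,        # key names are assumptions
        "should_stream": streaming,
        "should_record_csv": csv_recording,
        "auto_recovery_enabled": auto_recovery_enabled,
        "last_update": datetime.now().isoformat(),
    }
    with open(STATE_FILE, "w", encoding="utf-8") as f:
        json.dump(state, f, indent=4)


def load_system_state() -> dict:
    """Return the saved state, or safe defaults when no state file exists."""
    try:
        with open(STATE_FILE, "r", encoding="utf-8") as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return {"should_connect": False, "should_stream": False,
                "should_record_csv": False, "auto_recovery_enabled": True}
```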
**Single Instance Control**:
- Implemented PID-based instance locking using `psutil` library for Windows compatibility
- Process verification ensures only legitimate instances are detected (checks command line for 'main.py' or 'plc')
- Automatic cleanup of stale lock files from terminated processes
- Graceful handling of concurrent startup attempts
**Auto-Recovery System**:
- Automatic restoration of previous connection state on application startup
- Intelligent recovery sequence: PLC connection → streaming/CSV recording restoration
- Configurable auto-recovery with enable/disable option
- Retry mechanism with exponential backoff for failed recovery attempts
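A sketch of the recovery sequence under the rules stated above (3 attempts, exponential backoff); the callables and the function name are placeholders for the real connection and streaming methods.
```
import time


def attempt_auto_recovery(connect_plc, start_streaming, saved_state: dict,
                          max_retries: int = 3) -> bool:
    """Illustrative recovery: reconnect first, then restore streaming,
    retrying with exponential backoff on failure."""
    if not saved_state.get("auto_recovery_enabled", True):
        return False

    delay = 2.0
    for attempt in range(1, max_retries + 1):
        try:
            if saved_state.get("should_connect") and not connect_plc():
                raise ConnectionError("PLC connection failed")
            if saved_state.get("should_stream"):
                start_streaming()
            return True
        except Exception as e:
            print(f"Recovery attempt {attempt}/{max_retries} failed: {e}")
            time.sleep(delay)
            delay *= 2  # exponential backoff between attempts
    return False
```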
**Robust Error Handling**:
- Maximum retry system (3 attempts) for critical failures
- Graceful shutdown procedure with proper resource cleanup
- Instance lock release on application termination
- Comprehensive error logging and user feedback
**Technical Architecture**:
- State management integrated into existing configuration system
- Thread-safe state persistence with automatic file handling
- Cross-platform compatibility (Windows focus with `psutil` instead of `fcntl`)
- Memory-efficient state tracking with minimal performance impact
**User Experience Features**:
- **Information Panels**: Expandable sections showing detailed directory statistics
- **Manual Controls**: One-click manual cleanup with confirmation dialogs
- **Configuration Preview**: Real-time display of current settings and their effects
- **Progress Feedback**: Clear messages for successful operations and error conditions
**Industrial Benefits**:
- Zero-configuration restart after power failures
- Prevents operator confusion from multiple running instances
- Maintains data continuity during system interruptions
- Reduces manual intervention requirements in automated environments
- **Continuous Operation**: Prevents disk space exhaustion in long-running industrial systems
- **Data Lifecycle Management**: Automated retention policies for regulatory compliance
- **Storage Optimization**: Intelligent cleanup preserves recent data while managing space
- **Operational Visibility**: Clear insight into data storage patterns and system health
- **Maintenance Automation**: Reduces manual intervention requirements in production environments
**Dependencies Added**:
- `psutil==5.9.5` for cross-platform process management and instance control
**Default Configuration**:
- **Base Directory**: "records" (configurable)
- **Rotation Enabled**: True
- **Size Limit**: 1000 MB (1 GB)
- **Time Retention**: 30 days
- **Cleanup Interval**: 24 hours
### Future Considerations
**API Endpoints**:
- **GET /api/csv/config**: Retrieve current CSV configuration and disk statistics
- **POST /api/csv/config**: Update CSV configuration parameters
- **POST /api/csv/cleanup**: Trigger manual cleanup operation
- **GET /api/csv/directory/info**: Get detailed directory statistics and file information
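A minimal Flask sketch of the GET/POST configuration endpoints listed above, with an in-memory dict standing in for the real ConfigManager wiring; the route paths match the list, everything else is illustrative.
```
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the real ConfigManager.csv_config (illustrative wiring only)
csv_config = {"records_directory": "records", "rotation_enabled": True,
              "max_size_mb": 1000, "max_days": 30, "max_hours": None,
              "cleanup_interval_hours": 24, "last_cleanup": None}


@app.route("/api/csv/config", methods=["GET"])
def get_csv_config():
    """Return the current CSV configuration; the real endpoint also adds disk statistics."""
    return jsonify({"success": True, "config": csv_config})


@app.route("/api/csv/config", methods=["POST"])
def update_csv_config():
    """Update CSV configuration parameters from the JSON request body."""
    updates = request.get_json(silent=True) or {}
    unknown = set(updates) - set(csv_config)
    if unknown:
        return jsonify({"success": False,
                        "error": f"Unknown keys: {sorted(unknown)}"}), 400
    csv_config.update(updates)
    return jsonify({"success": True, "config": csv_config})


if __name__ == "__main__":
    app.run(port=5000)
```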
The persistent configuration system provides a foundation for more advanced features like configuration profiles, backup/restore functionality, and remote configuration management.
---
The English interface and standardized design make the application ready for potential integration with larger industrial monitoring systems or deployment in international environments.
#### Real-Time Variable Value Display System
**Need**: Users requested the ability to see current values of PLC variables in the web interface, especially when variables are being read for CSV recording. Since Flask doesn't natively support real-time streaming like WebSockets, a manual refresh approach was implemented.
The industrial-grade reliability enhancements ensure the application meets the stringent uptime requirements of production environments and can recover automatically from common industrial disruptions like power outages and system reboots.
#### Dynamic CSV File Creation on Variable Modifications
**Decision**: Implemented automatic creation of new CSV files with timestamp when variables are modified during active recording.
**Rationale**: When variables are added or removed during active CSV recording, the data structure changes require a new file to maintain data integrity. Continuing to write to the same file would result in misaligned columns and data corruption. The timestamped filename provides clear traceability of when variable configuration changes occurred.
**Solution**: Added a "Current Value" column to the variables table with a refresh button that reads current values from the PLC on demand, providing immediate feedback on variable states without continuous polling.
**Implementation**:
**File Naming Strategy**:
- Standard hourly files: `hour.csv` (e.g., `14.csv` for 2:00 PM)
- Modification files: `_hour_min_sec.csv` (e.g., `_14_25_33.csv` for 2:25:33 PM)
- Maintains chronological order while clearly identifying modification points
**Backend Enhancement**:
- **New API Endpoint**: `GET /api/datasets/<dataset_id>/variables/values`
- **PLC Value Reading**: Utilizes existing `PLCClient.read_multiple_variables()` method
- **Value Formatting**: Smart formatting based on data type (REAL with 3 decimals, BOOL as TRUE/FALSE, integers as whole numbers)
- **Error Handling**: Graceful handling of PLC connection issues and read errors
- **Timestamp**: Includes read timestamp for user reference
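The server-side formatting rules described above amount to a small helper; a sketch follows (the function name is illustrative).
```
def format_plc_value(value, data_type: str) -> str:
    """Format a raw PLC value for display following the rules above."""
    if value is None:
        return "ERROR"
    data_type = data_type.lower()
    if data_type == "real":
        return f"{value:.3f}"              # REAL with 3 decimals
    if data_type == "bool":
        return "TRUE" if value else "FALSE"
    return str(int(value))                 # int/word types as whole numbers
```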
**Automatic File Management**:
- Detects variable additions and removals during active CSV recording
- Automatically closes current CSV file when variables are modified
- Creates new file with modification timestamp immediately
- Writes new headers matching updated variable configuration
- Resets internal state to ensure proper file rotation continues
**Frontend Enhancements**:
- **Table Column**: Added "Current Value" column to variables table
- **Refresh Button**: Manual refresh button with loading state indication
- **Visual Feedback**: Color-coded values (green for successful reads, red for errors, gray for offline)
- **Auto-Refresh**: Automatic value refresh when switching between datasets
- **Timestamp Display**: Shows last refresh time for user awareness
**Enhanced `get_csv_file_path()` Method**:
- Added `use_modification_timestamp` parameter for conditional file naming
- Preserves backward compatibility with existing hourly rotation
- Generates precise timestamp (`%H_%M_%S`) for modification tracking
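A standalone sketch of that naming logic; the real method lives on the streamer class and also applies dataset prefixes, so this is illustrative only.
```
import os
from datetime import datetime


def get_csv_file_path(base_dir: str = "records",
                      use_modification_timestamp: bool = False) -> str:
    """Sketch of the dual naming scheme: hourly files by default, a precise
    %H_%M_%S timestamp when a variable modification forces a new file."""
    now = datetime.now()
    directory = os.path.join(base_dir, now.strftime("%d-%m-%Y"))
    os.makedirs(directory, exist_ok=True)
    if use_modification_timestamp:
        filename = f"_{now.strftime('%H_%M_%S')}.csv"  # e.g. _14_25_33.csv
    else:
        filename = f"{now.strftime('%H')}.csv"         # e.g. 14.csv
    return os.path.join(directory, filename)
```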
**User Experience Features**:
- **Loading States**: Button shows "⏳ Reading..." during PLC communication
- **Status Messages**: Clear feedback about read operations and PLC connection status
- **Error Indication**: Displays "ERROR", "PLC OFFLINE", or "COMM ERROR" as appropriate
- **Dataset Context**: Values are cleared when no dataset is selected
- **Connection Awareness**: Checks PLC connection before attempting reads
**New `create_new_csv_file_for_variable_modification()` Method**:
- Triggered automatically by `add_variable()` and `remove_variable()` methods
- Only activates when CSV recording is active to avoid unnecessary file creation
- Handles file closure, creation, and header writing in single atomic operation
- Comprehensive error handling with detailed logging
**Technical Implementation**:
- **Frontend Function**: `refreshVariableValues()` handles the refresh operation
- **Value Cells**: Each variable has a uniquely identified cell for value display
- **Format Logic**: Server-side formatting ensures consistent display across data types
- **Event Integration**: Integrates with existing dataset management system
- **CSS Styling**: Monospace font for values, visual indicators for different states
**Event Logging Integration**:
- Records CSV file creation events with modification reason
- Logs file paths, variable counts, and timestamps for audit trails
- Distinguishes between regular rotation and modification-triggered files
**Benefits**:
- **Immediate Feedback**: Users can verify PLC communication and variable values instantly
- **Debugging Aid**: Helps troubleshoot PLC configuration and connectivity issues
- **Process Monitoring**: Allows monitoring of critical process variables during configuration
- **No Continuous Polling**: Efficient manual refresh approach reduces network overhead
- **User Control**: Users decide when to refresh, maintaining performance
- **Clear Status**: Visual indicators provide clear information about system state
---
#### Enhanced Error Diagnostics and Troubleshooting System
**Issue**: When the refresh functionality showed "ERROR" for all variables, users had no information about the specific cause, making troubleshooting difficult and time-consuming.
**Solution**: Implemented a comprehensive error diagnostics system that provides detailed information about connection issues, variable configuration problems, and specific read errors for each variable.
**Implementation**:
**Backend Diagnostics Enhancement**:
- **New PLCClient Method**: `read_multiple_variables_with_diagnostics()` provides detailed error information
- **Specific Error Types**: Categorizes errors as ConnectionError, TimeoutError, ValueError, or general exceptions
- **Variable-Level Diagnostics**: Individual error messages for each variable that fails to read
- **Statistical Summary**: Success/failure counts and overall operation status
- **Detailed Logging**: Enhanced logging with specific error details for debugging
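A sketch of the per-variable diagnostics idea: reading variables one by one so each failure carries its own categorized message. The `read_variable()` helper used here is an assumption, not the actual PLCClient API.
```
def read_multiple_variables_with_diagnostics(plc_client, variables: dict) -> dict:
    """Illustrative wrapper returning values, per-variable errors and a summary."""
    values, errors = {}, {}
    for name, var_config in variables.items():
        try:
            # read_variable() is a hypothetical per-variable read helper
            values[name] = plc_client.read_variable(var_config)
        except ConnectionError as e:
            values[name], errors[name] = None, f"Connection error: {e}"
        except TimeoutError as e:
            values[name], errors[name] = None, f"Timeout error: {e}"
        except ValueError as e:
            values[name], errors[name] = None, f"Configuration error: {e}"
        except Exception as e:
            values[name], errors[name] = None, f"Read error: {e}"
    return {
        "values": values,
        "errors": errors,
        "success_count": sum(1 for v in values.values() if v is not None),
        "failed_count": len(errors),
    }
```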
**Frontend Diagnostic Features**:
- **🔍 Diagnose Button**: Additional diagnostic button that runs comprehensive connection and variable tests
- **Error Tooltips**: Hover over ERROR values to see specific error messages
- **Enhanced Status Display**: Shows success/failure statistics and last refresh time with detailed status
- **Console Reporting**: Detailed diagnostic reports logged to browser console for technical analysis
- **Visual Error Classification**: Different colors and indicators for various error types
**Diagnostic Report Contents**:
- **Connection Status**: PLC connectivity, IP configuration, rack/slot verification
- **Dataset Information**: Variable count, active status, configuration validation
- **Variable Reading Test**: Individual variable read results with specific error messages
- **Troubleshooting Suggestions**: Contextual advice based on error types detected
- **Statistical Summary**: Success rates, failure counts, and overall system health
**Error Message Improvements**:
- **Specific Error Types**: "Configuration error", "Connection error", "Timeout error" instead of generic "ERROR"
- **Helpful Suggestions**: Contextual troubleshooting advice for each error type
- **Network Diagnostics**: Detailed network connectivity and communication analysis
- **PLC Configuration Validation**: Verification of memory addresses, data types, and block existence
**User Experience Enhancements**:
- **Progressive Disclosure**: Summary messages with detailed information available in console
- **Visual Indicators**: Color-coded status messages (success, warning, error)
- **Hover Help**: Tooltips provide immediate error details without cluttering the interface
- **Diagnostic Workflow**: Step-by-step diagnostic process that guides troubleshooting
**Technical Benefits**:
- Maintains data integrity across variable configuration changes
- Provides clear audit trail of when system configuration was modified
- Enables precise correlation between data files and system state
- Supports continuous operation without manual intervention
- **Faster Problem Resolution**: Specific error messages enable quicker identification of issues
- **Reduced Support Overhead**: Users can self-diagnose common configuration problems
- **Better System Monitoring**: Detailed logging enables proactive maintenance
- **Production Reliability**: Early detection of communication and configuration issues
**Data Continuity**:
- Zero data loss during variable modifications
- Seamless transition between files without interrupting recording
- Automatic header generation ensures proper CSV structure
- Maintains sampling rate and timing precision
---
This enhancement ensures that CSV data remains structured and analyzable even when the monitoring configuration evolves during operation, which is critical for long-running industrial processes where monitoring requirements may change.
#### Optimized Value Display Using Streaming Cache
**Improvement**: Modified the variable value refresh system to use cached values from the streaming process instead of making new PLC reads, improving efficiency and consistency with CSV data.
**Bug Fix - File Continuation Logic**:
- Added `using_modification_file` flag to track when a modification timestamp file is active
- Modified `setup_csv_file()` to respect this flag and continue using the modification file until the hour naturally changes
- Prevents the system from immediately reverting to standard hourly files after creating modification files
- Ensures data continuity in the intended timestamped file rather than switching back to regular rotation
**Rationale**: The original implementation made direct PLC reads every time the user clicked refresh, which was inefficient and could show different values than those being written to CSV files. Since the streaming system already reads values continuously, it makes more sense to display those exact values.
**Implementation**:
**Backend Cache System**:
- **Value Cache**: `last_read_values{}` stores the most recent values read during streaming
- **Timestamp Cache**: `last_read_timestamps{}` records when values were last read
- **Error Cache**: `last_read_errors{}` maintains specific error information for each variable
- **Automatic Updates**: Cache is updated every streaming cycle with actual CSV data
- **Cache Management**: Values are cleared when datasets are deactivated or streaming stops
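A minimal sketch of the cache structure described above; keying by dataset is an assumption, and in the application these dictionaries live on `DataStreamer` and back `get_cached_dataset_values()`.
```
from datetime import datetime


class ValueCache:
    """Sketch of the streaming value cache described above."""

    def __init__(self):
        self.last_read_values = {}      # dataset_id -> {var_name: value}
        self.last_read_timestamps = {}  # dataset_id -> ISO timestamp of the read
        self.last_read_errors = {}      # dataset_id -> {var_name: error message}

    def update(self, dataset_id: str, values: dict, errors: dict):
        """Called once per streaming cycle with exactly the data written to CSV."""
        self.last_read_values[dataset_id] = values
        self.last_read_errors[dataset_id] = errors
        self.last_read_timestamps[dataset_id] = datetime.now().isoformat()

    def get_cached_dataset_values(self, dataset_id: str):
        """Return cached values, or None so callers fall back to a direct PLC read."""
        if dataset_id not in self.last_read_values:
            return None
        return {
            "values": self.last_read_values[dataset_id],
            "errors": self.last_read_errors.get(dataset_id, {}),
            "timestamp": self.last_read_timestamps[dataset_id],
            "source": "cache",
        }

    def clear(self, dataset_id: str):
        """Drop cached data when a dataset is deactivated or streaming stops."""
        for d in (self.last_read_values, self.last_read_timestamps,
                  self.last_read_errors):
            d.pop(dataset_id, None)
```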
**Smart Data Source Selection**:
- **Primary**: Uses cached values from streaming process (exactly what's in CSV)
- **Fallback**: Direct PLC read only when no cache available (streaming not active)
- **Source Indication**: Clear labeling of data source for user awareness
- **Consistency**: Ensures displayed values match CSV file contents
**Enhanced User Interface**:
- **Source Indicators**: Visual icons showing data origin (📊 cache vs 🔗 direct)
- **Timestamp Accuracy**: Uses actual read timestamp from streaming process
- **Performance Indication**: Shows "from streaming cache" or "direct PLC read"
- **Cache Availability**: Automatically falls back to direct reads when needed
**Operational Benefits**:
- **Reduced PLC Load**: Eliminates unnecessary duplicate reads from PLC
- **Data Consistency**: Shows exactly the same values being written to CSV
- **Better Performance**: No communication delays when displaying cached values
- **Network Efficiency**: Reduces PLC network traffic and potential timeouts
- **Real-time Accuracy**: Values reflect the actual streaming process state
**User Experience Improvements**:
- **Instant Response**: Cached values display immediately without PLC communication
- **Source Transparency**: Users know whether they're seeing live or cached data
- **Streaming Awareness**: Interface clearly indicates when streaming is providing data
- **Fallback Reliability**: System still works when streaming is not active
**Technical Implementation**:
- **Cache Integration**: `DataStreamer.get_cached_dataset_values()` provides cached data access
- **Source Tracking**: Response includes source information (`cache` vs `plc_direct`)
- **Error Preservation**: Cached errors from streaming process are preserved and displayed
- **Automatic Cleanup**: Cache is cleared when streaming stops or datasets are deactivated

View File

@ -1868,8 +1868,565 @@
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T12:29:10.487361",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T13:21:24.048901",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:13:44.454351",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:13:54.748671",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:14:02.488101",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:15:52.777059",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:15:58.811720",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:16:31.631302",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:16:43.269892",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:17:10.303046",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:18:02.206369",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:21:22.011786",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:21:30.848754",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:24:17.564257",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:25:06.849376",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T14:45:27.402524",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T16:00:07.378875",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T22:01:19.153301",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 10.1.33.11",
"details": {
"ip": "10.1.33.11",
"rack": 0,
"slot": 2
}
},
{
"timestamp": "2025-07-19T22:01:34.152270",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 10.1.33.11",
"details": {
"ip": "10.1.33.11",
"rack": 0,
"slot": 2
}
},
{
"timestamp": "2025-07-19T22:01:53.799674",
"level": "info",
"event_type": "config_change",
"message": "PLC configuration updated: 127.0.0.1:0/2",
"details": {
"old_config": {
"ip": "10.1.33.11",
"rack": 0,
"slot": 2
},
"new_config": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 2
}
}
},
{
"timestamp": "2025-07-19T22:01:58.549741",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 127.0.0.1",
"details": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 2
}
},
{
"timestamp": "2025-07-19T22:06:22.607781",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T22:11:36.698746",
"level": "info",
"event_type": "config_change",
"message": "PLC configuration updated: 127.0.0.1:0/1",
"details": {
"old_config": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 2
},
"new_config": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 1
}
}
},
{
"timestamp": "2025-07-19T22:11:41.974399",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 127.0.0.1",
"details": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 1
}
},
{
"timestamp": "2025-07-19T22:11:44.917328",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 127.0.0.1",
"details": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 1
}
},
{
"timestamp": "2025-07-19T22:12:28.865152",
"level": "info",
"event_type": "config_change",
"message": "PLC configuration updated: 127.0.0.1:0/0",
"details": {
"old_config": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 1
},
"new_config": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 0
}
}
},
{
"timestamp": "2025-07-19T22:12:33.010656",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 127.0.0.1",
"details": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 0
}
},
{
"timestamp": "2025-07-19T22:12:45.632358",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 127.0.0.1",
"details": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 0
}
},
{
"timestamp": "2025-07-19T22:16:32.861156",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T22:16:42.440110",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 127.0.0.1",
"details": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 0
}
},
{
"timestamp": "2025-07-19T22:16:53.326588",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 127.0.0.1",
"details": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 0
}
},
{
"timestamp": "2025-07-19T22:17:46.848148",
"level": "info",
"event_type": "config_change",
"message": "PLC configuration updated: 127.0.0.1:0/1",
"details": {
"old_config": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 0
},
"new_config": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 1
}
}
},
{
"timestamp": "2025-07-19T22:17:51.814340",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 127.0.0.1",
"details": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 1
}
},
{
"timestamp": "2025-07-19T22:18:01.265041",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 127.0.0.1",
"details": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 1
}
},
{
"timestamp": "2025-07-19T23:27:08.208450",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T23:27:20.489074",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 127.0.0.1",
"details": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 1
}
},
{
"timestamp": "2025-07-19T23:27:33.435449",
"level": "info",
"event_type": "config_change",
"message": "PLC configuration updated: 10.1.33.11:0/2",
"details": {
"old_config": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 1
},
"new_config": {
"ip": "10.1.33.11",
"rack": 0,
"slot": 2
}
}
},
{
"timestamp": "2025-07-19T23:27:35.355205",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 10.1.33.11",
"details": {
"ip": "10.1.33.11",
"rack": 0,
"slot": 2
}
},
{
"timestamp": "2025-07-19T23:27:49.857326",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 10.1.33.11",
"details": {
"ip": "10.1.33.11",
"rack": 0,
"slot": 2
}
},
{
"timestamp": "2025-07-19T23:28:03.411972",
"level": "info",
"event_type": "config_change",
"message": "PLC configuration updated: 127.0.0.1:0/2",
"details": {
"old_config": {
"ip": "10.1.33.11",
"rack": 0,
"slot": 2
},
"new_config": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 2
}
}
},
{
"timestamp": "2025-07-19T23:28:07.009421",
"level": "error",
"event_type": "plc_connection_failed",
"message": "Failed to connect to PLC 127.0.0.1",
"details": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 2
}
},
{
"timestamp": "2025-07-19T23:29:14.947031",
"level": "info",
"event_type": "config_change",
"message": "PLC configuration updated: 10.1.33.249:0/2",
"details": {
"old_config": {
"ip": "127.0.0.1",
"rack": 0,
"slot": 2
},
"new_config": {
"ip": "10.1.33.249",
"rack": 0,
"slot": 2
}
}
},
{
"timestamp": "2025-07-19T23:29:18.132063",
"level": "info",
"event_type": "plc_connection",
"message": "Successfully connected to PLC 10.1.33.249",
"details": {
"ip": "10.1.33.249",
"rack": 0,
"slot": 2
}
},
{
"timestamp": "2025-07-19T23:30:10.893919",
"level": "info",
"event_type": "dataset_activated",
"message": "Dataset activated: DAR",
"details": {
"dataset_id": "dar",
"variables_count": 6,
"streaming_count": 4,
"prefix": "dar"
}
},
{
"timestamp": "2025-07-19T23:30:10.896921",
"level": "info",
"event_type": "streaming_started",
"message": "Multi-dataset streaming started: 1 datasets activated",
"details": {
"activated_datasets": 1,
"total_datasets": 2,
"udp_host": "127.0.0.1",
"udp_port": 9870
}
},
{
"timestamp": "2025-07-19T23:33:32.589873",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T23:36:44.619374",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T23:36:59.720797",
"level": "info",
"event_type": "plc_connection",
"message": "Successfully connected to PLC 10.1.33.249",
"details": {
"ip": "10.1.33.249",
"rack": 0,
"slot": 2
}
},
{
"timestamp": "2025-07-19T23:37:00.935380",
"level": "info",
"event_type": "dataset_activated",
"message": "Dataset activated: DAR",
"details": {
"dataset_id": "dar",
"variables_count": 6,
"streaming_count": 4,
"prefix": "dar"
}
},
{
"timestamp": "2025-07-19T23:37:00.938380",
"level": "info",
"event_type": "streaming_started",
"message": "Multi-dataset streaming started: 1 datasets activated",
"details": {
"activated_datasets": 1,
"total_datasets": 2,
"udp_host": "127.0.0.1",
"udp_port": 9870
}
},
{
"timestamp": "2025-07-19T23:43:46.929550",
"level": "info",
"event_type": "application_started",
"message": "Application initialization completed successfully",
"details": {}
},
{
"timestamp": "2025-07-19T23:43:54.596824",
"level": "info",
"event_type": "plc_connection",
"message": "Successfully connected to PLC 10.1.33.249",
"details": {
"ip": "10.1.33.249",
"rack": 0,
"slot": 2
}
},
{
"timestamp": "2025-07-19T23:43:55.377313",
"level": "info",
"event_type": "dataset_activated",
"message": "Dataset activated: DAR",
"details": {
"dataset_id": "dar",
"variables_count": 6,
"streaming_count": 4,
"prefix": "dar"
}
},
{
"timestamp": "2025-07-19T23:43:55.380321",
"level": "info",
"event_type": "streaming_started",
"message": "Multi-dataset streaming started: 1 datasets activated",
"details": {
"activated_datasets": 1,
"total_datasets": 2,
"udp_host": "127.0.0.1",
"udp_port": 9870
}
}
],
"last_updated": "2025-07-19T12:18:29.904793",
"total_entries": 172
"last_updated": "2025-07-19T23:43:55.380321",
"total_entries": 226
}

View File

@ -34,6 +34,17 @@ class ConfigManager:
        self.udp_config = {"host": "127.0.0.1", "port": 9870}
        self.sampling_interval = 0.1

        # CSV recording configuration
        self.csv_config = {
            "records_directory": "records",  # Base directory for CSV files
            "rotation_enabled": True,
            "max_size_mb": 1000,  # Maximum total size in MB (1GB default)
            "max_days": 30,  # Maximum days to keep files
            "max_hours": None,  # Maximum hours to keep files (None = use max_days)
            "cleanup_interval_hours": 24,  # How often to run cleanup (hours)
            "last_cleanup": None,  # Last cleanup timestamp
        }

        # Datasets management
        self.datasets = {}  # Dictionary of dataset_id -> dataset_config
        self.active_datasets = set()  # Set of active dataset IDs
@ -71,6 +82,10 @@ class ConfigManager:
                self.sampling_interval = config.get(
                    "sampling_interval", self.sampling_interval
                )
                self.csv_config = {
                    **self.csv_config,
                    **config.get("csv_config", {}),
                }

                if self.logger:
                    self.logger.info(f"Configuration loaded from {self.config_file}")
@ -88,6 +103,7 @@ class ConfigManager:
"plc_config": self.plc_config,
"udp_config": self.udp_config,
"sampling_interval": self.sampling_interval,
"csv_config": self.csv_config,
}
with open(self.config_file, "w") as f:
json.dump(config, f, indent=4)
@ -258,6 +274,77 @@ class ConfigManager:
        self.save_configuration()
        return {"old_interval": old_interval, "new_interval": interval}

    # CSV Configuration Methods
    def update_csv_config(self, **kwargs):
        """Update CSV recording configuration"""
        old_config = self.csv_config.copy()

        # Validate and update configuration
        valid_keys = {
            "records_directory",
            "rotation_enabled",
            "max_size_mb",
            "max_days",
            "max_hours",
            "cleanup_interval_hours",
        }

        for key, value in kwargs.items():
            if key in valid_keys:
                # Validate specific values
                if key == "records_directory" and not isinstance(value, str):
                    raise ValueError("records_directory must be a string")
                elif key == "rotation_enabled" and not isinstance(value, bool):
                    raise ValueError("rotation_enabled must be a boolean")
                elif key in ["max_size_mb", "max_days", "cleanup_interval_hours"]:
                    if value is not None and (
                        not isinstance(value, (int, float)) or value <= 0
                    ):
                        raise ValueError(f"{key} must be a positive number or None")
                elif key == "max_hours":
                    if value is not None and (
                        not isinstance(value, (int, float)) or value <= 0
                    ):
                        raise ValueError("max_hours must be a positive number or None")

                self.csv_config[key] = value

        self.save_configuration()
        return {"old_config": old_config, "new_config": self.csv_config}

    def get_csv_directory_path(self) -> str:
        """Get the configured CSV directory path"""
        return self.csv_config["records_directory"]

    def get_csv_file_directory_path(self) -> str:
        """Get the directory path for current day's CSV files"""
        now = datetime.now()
        day_folder = now.strftime("%d-%m-%Y")
        return os.path.join(self.get_csv_directory_path(), day_folder)

    def should_perform_cleanup(self) -> bool:
        """Check if cleanup should be performed based on interval"""
        if not self.csv_config["rotation_enabled"]:
            return False

        last_cleanup = self.csv_config.get("last_cleanup")
        if not last_cleanup:
            return True

        try:
            last_cleanup_dt = datetime.fromisoformat(last_cleanup)
            hours_since_cleanup = (
                datetime.now() - last_cleanup_dt
            ).total_seconds() / 3600
            return hours_since_cleanup >= self.csv_config["cleanup_interval_hours"]
        except (ValueError, TypeError):
            return True

    def mark_cleanup_performed(self):
        """Mark that cleanup was performed"""
        self.csv_config["last_cleanup"] = datetime.now().isoformat()
        self.save_configuration()

    # Dataset Management Methods
    def create_dataset(
        self, dataset_id: str, name: str, prefix: str, sampling_interval: float = None

View File

@ -2,6 +2,7 @@ import os
import atexit
import psutil
import time
import platform
from typing import Optional, Callable
@ -15,75 +16,146 @@ class InstanceManager:
        self.lock_fd = None
        self._cleanup_registered = False

    def _safe_remove_lock_file(self, max_retries=5, delay=0.2) -> bool:
        """Safely remove lock file with retry logic for Windows compatibility"""
        if not os.path.exists(self.lock_file):
            return True

        for attempt in range(max_retries):
            try:
                os.remove(self.lock_file)
                return True
            except PermissionError as e:
                if platform.system() == "Windows" and attempt < max_retries - 1:
                    if self.logger:
                        self.logger.debug(f"Lock file removal attempt {attempt + 1} failed (Windows), retrying in {delay}s...")
                    time.sleep(delay)
                    delay *= 1.5  # Exponential backoff
                else:
                    if self.logger:
                        self.logger.warning(f"Failed to remove lock file after {max_retries} attempts: {e}")
                    return False
            except Exception as e:
                if self.logger:
                    self.logger.warning(f"Unexpected error removing lock file: {e}")
                return False

        return False
def acquire_instance_lock(self) -> bool:
"""Acquire lock to ensure single instance execution"""
"""Acquire lock to ensure single instance execution with improved stale lock detection"""
try:
print("🔍 Checking for existing instances...")
# Check if lock file exists
if os.path.exists(self.lock_file):
# Read PID from existing lock file
with open(self.lock_file, "r") as f:
try:
lock_should_be_removed = False
removal_reason = ""
old_pid = None
# Try to read PID from existing lock file
try:
with open(self.lock_file, "r") as f:
old_pid = int(f.read().strip())
if self.logger:
self.logger.info(f"Found existing lock file with PID: {old_pid}")
# Check if process is still running
if psutil.pid_exists(old_pid):
# Get process info to verify it's our application
try:
proc = psutil.Process(old_pid)
cmdline = " ".join(proc.cmdline())
# More specific check - only block if it's really our application
if (
(
"main.py" in cmdline
and "S7_snap7_Stremer_n_Log" in cmdline
)
or ("plc_streamer" in cmdline.lower())
or ("PLCDataStreamer" in cmdline)
):
if self.logger:
self.logger.error(
f"Another instance is already running (PID: {old_pid})"
)
self.logger.error(f"Command line: {cmdline}")
return False
else:
# Different Python process, remove stale lock
if self.logger:
self.logger.info(
f"Found different Python process (PID: {old_pid}), removing stale lock"
)
os.remove(self.lock_file)
except (psutil.NoSuchProcess, psutil.AccessDenied):
# Process doesn't exist or can't access, continue
pass
# Check if process is still running
if psutil.pid_exists(old_pid):
# Get process info to verify it's our application
try:
proc = psutil.Process(old_pid)
cmdline = " ".join(proc.cmdline())
# More specific check - only block if it's really our application
if (
("main.py" in cmdline and "S7_snap7_Stremer_n_Log" in cmdline)
or ("plc_streamer" in cmdline.lower())
or ("PLCDataStreamer" in cmdline)
):
print(f"❌ Another instance of PLC Streamer is already running (PID: {old_pid})")
print(f" Command: {cmdline}")
print("💡 Stop the other instance first or wait for it to finish")
if self.logger:
self.logger.error(f"Another instance is already running (PID: {old_pid})")
self.logger.error(f"Command line: {cmdline}")
return False
else:
# Different Python process, remove stale lock
lock_should_be_removed = True
removal_reason = f"Found lock file from different application (PID {old_pid})"
if self.logger:
self.logger.info(f"Found different Python process (PID: {old_pid}), removing stale lock")
self.logger.info(f"Different process command: {cmdline}")
except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
# Process disappeared or can't access it, remove stale lock
lock_should_be_removed = True
removal_reason = f"Process {old_pid} is not accessible"
if self.logger:
self.logger.info(f"Process {old_pid} is not accessible, removing stale lock")
else:
# Old process is dead, remove stale lock file
os.remove(self.lock_file)
lock_should_be_removed = True
removal_reason = f"Found stale lock file (PID {old_pid} doesn't exist)"
if self.logger:
self.logger.info("Removed stale lock file")
self.logger.info(f"Removed stale lock file - PID {old_pid} doesn't exist")
except (ValueError, IOError):
# Invalid lock file, remove it
os.remove(self.lock_file)
except (ValueError, IOError, UnicodeDecodeError):
# Invalid lock file, remove it
lock_should_be_removed = True
removal_reason = "Invalid or corrupted lock file"
if self.logger:
self.logger.info("Removing invalid lock file")
# Perform safe removal if needed
if lock_should_be_removed:
print(f"🧹 {removal_reason}, removing it")
if not self._safe_remove_lock_file():
print(f"⚠️ Unable to remove lock file. Trying to continue...")
if self.logger:
self.logger.info("Removed invalid lock file")
self.logger.warning("Failed to remove lock file, but continuing with initialization")
# Create new lock file with current PID
with open(self.lock_file, "w") as f:
f.write(str(os.getpid()))
# Create new lock file with current PID (with retry for Windows)
lock_created = False
for attempt in range(3):
try:
with open(self.lock_file, "w") as f:
f.write(str(os.getpid()))
lock_created = True
break
except PermissionError as e:
if platform.system() == "Windows" and attempt < 2:
if self.logger:
self.logger.debug(f"Lock file creation attempt {attempt + 1} failed, retrying...")
time.sleep(0.5)
else:
raise e
if not lock_created:
raise PermissionError("Unable to create lock file after multiple attempts")
# Register cleanup function only once
if not self._cleanup_registered:
atexit.register(self.release_instance_lock)
self._cleanup_registered = True
print(f"✅ Instance lock acquired successfully (PID: {os.getpid()})")
if self.logger:
self.logger.info(
f"Instance lock acquired: {self.lock_file} (PID: {os.getpid()})"
)
self.logger.info(f"Instance lock acquired: {self.lock_file} (PID: {os.getpid()})")
return True
except Exception as e:
print(f"⚠️ Error acquiring instance lock: {e}")
if self.logger:
self.logger.error(f"Error acquiring instance lock: {e}")
return False
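# Illustrative startup guard built on the two methods of this class. The
# function name and the main_loop callable are hypothetical; the explicit
# release is only a safety net, since acquire_instance_lock() already registers
# atexit cleanup.
def run_single_instance(instance_manager, main_loop):
    if not instance_manager.acquire_instance_lock():
        raise SystemExit(1)  # another PLC Streamer instance is running
    try:
        main_loop()
    finally:
        instance_manager.release_instance_lock()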
@ -91,11 +163,13 @@ class InstanceManager:
def release_instance_lock(self):
"""Release instance lock"""
try:
# Remove lock file
if os.path.exists(self.lock_file):
os.remove(self.lock_file)
# Remove lock file using safe removal method
if self._safe_remove_lock_file():
if self.logger:
self.logger.info("Instance lock released")
else:
if self.logger:
self.logger.warning("Lock file removal failed during release")
except Exception as e:
if self.logger:

@ -264,6 +264,104 @@ class PLCClient:
return data
def read_multiple_variables_with_diagnostics(
self, variables: Dict[str, Dict[str, Any]]
) -> Dict[str, Any]:
"""Read multiple variables from the PLC with detailed error diagnostics"""
if not self.is_connected():
return {
"success": False,
"error": "PLC not connected",
"error_type": "connection_error",
"values": {},
"errors": {},
}
data = {}
errors = {}
success_count = 0
total_count = len(variables)
for var_name, var_config in variables.items():
try:
value = self.read_variable(var_config)
if value is not None:
data[var_name] = value
success_count += 1
else:
data[var_name] = None
errors[var_name] = (
"Read returned None - possible configuration error"
)
except ConnectionError as e:
data[var_name] = None
errors[var_name] = f"Connection error: {str(e)}"
if self.logger:
self.logger.error(f"Connection error reading {var_name}: {e}")
except TimeoutError as e:
data[var_name] = None
errors[var_name] = f"Timeout error: {str(e)}"
if self.logger:
self.logger.error(f"Timeout reading {var_name}: {e}")
except ValueError as e:
data[var_name] = None
errors[var_name] = f"Configuration error: {str(e)}"
if self.logger:
self.logger.error(f"Configuration error for {var_name}: {e}")
except Exception as e:
data[var_name] = None
error_msg = f"Unexpected error: {type(e).__name__}: {str(e)}"
errors[var_name] = error_msg
if self.logger:
self.logger.error(f"Unexpected error reading {var_name}: {e}")
# Determine overall success
if success_count == 0:
if total_count == 0:
return {
"success": True,
"message": "No variables to read",
"values": {},
"errors": {},
}
else:
return {
"success": False,
"error": "Failed to read any variables",
"error_type": "all_failed",
"values": data,
"errors": errors,
"stats": {
"success": 0,
"failed": total_count,
"total": total_count,
},
}
elif success_count < total_count:
return {
"success": True,
"warning": f"Partial success: {success_count}/{total_count} variables read",
"values": data,
"errors": errors,
"stats": {
"success": success_count,
"failed": total_count - success_count,
"total": total_count,
},
}
else:
return {
"success": True,
"message": f"Successfully read all {total_count} variables",
"values": data,
"errors": {},
"stats": {"success": success_count, "failed": 0, "total": total_count},
}
def get_connection_info(self) -> Dict[str, Any]:
"""Get current connection information"""
return {"connected": self.connected, "client_available": self.plc is not None}

@ -461,6 +461,14 @@ class PLCDataStreamer:
"""Get recent events from the log"""
return self.event_logger.get_recent_events(limit)
def get_cached_dataset_values(self, dataset_id: str):
"""Get cached values for a dataset (values used for CSV generation)"""
return self.data_streamer.get_cached_dataset_values(dataset_id)
def has_cached_values(self, dataset_id: str) -> bool:
"""Check if dataset has cached values available"""
return self.data_streamer.has_cached_values(dataset_id)
# Auto-recovery and Instance Management
def attempt_auto_recovery(self):
"""Attempt to restore previous system state"""

@ -44,6 +44,11 @@ class DataStreamer:
self.dataset_csv_hours = {} # dataset_id -> current hour
self.dataset_using_modification_files = {} # dataset_id -> bool
# Cache for last read values (exactly what's being written to CSV)
self.last_read_values = {} # dataset_id -> {var_name: value}
self.last_read_timestamps = {} # dataset_id -> timestamp
self.last_read_errors = {} # dataset_id -> {var_name: error_message}
def setup_udp_socket(self) -> bool:
"""Setup UDP socket for PlotJuggler communication"""
try:
@ -85,15 +90,17 @@ class DataStreamer:
def get_csv_directory_path(self) -> str:
"""Get the directory path for current day's CSV files"""
now = datetime.now()
day_folder = now.strftime("%d-%m-%Y")
return os.path.join("records", day_folder)
return self.config_manager.get_csv_file_directory_path()
def ensure_csv_directory(self):
"""Create CSV directory structure if it doesn't exist"""
directory = self.get_csv_directory_path()
Path(directory).mkdir(parents=True, exist_ok=True)
# Perform cleanup if needed
if self.config_manager.should_perform_cleanup():
self.perform_csv_cleanup()
def get_dataset_csv_file_path(
self, dataset_id: str, use_modification_timestamp: bool = False
) -> str:
@ -274,22 +281,135 @@ class DataStreamer:
def read_dataset_variables(
self, dataset_id: str, variables: Dict[str, Any]
) -> Dict[str, Any]:
"""Read all variables for a specific dataset"""
"""Read all variables for a specific dataset and update cache"""
data = {}
errors = {}
timestamp = datetime.now()
for var_name, var_config in variables.items():
try:
value = self.plc_client.read_variable(var_config)
data[var_name] = value
# Clear any previous error for this variable
if (
dataset_id in self.last_read_errors
and var_name in self.last_read_errors[dataset_id]
):
del self.last_read_errors[dataset_id][var_name]
except Exception as e:
if self.logger:
self.logger.warning(
f"Error reading variable {var_name} in dataset {dataset_id}: {e}"
)
data[var_name] = None
errors[var_name] = f"Read error: {str(e)}"
# Update cache with latest values and timestamp
self.last_read_values[dataset_id] = data.copy()
self.last_read_timestamps[dataset_id] = timestamp
# Update errors cache
if errors:
if dataset_id not in self.last_read_errors:
self.last_read_errors[dataset_id] = {}
self.last_read_errors[dataset_id].update(errors)
elif dataset_id in self.last_read_errors:
# Clear all errors if this read was completely successful
if all(value is not None for value in data.values()):
self.last_read_errors[dataset_id] = {}
return data
def get_cached_dataset_values(self, dataset_id: str) -> Dict[str, Any]:
"""Get cached values for a dataset (values used for CSV generation)"""
if dataset_id not in self.last_read_values:
return {
"success": False,
"error": "No cached values available",
"error_type": "no_cache",
"message": "Dataset has not been read yet or streaming is not active",
"values": {},
"errors": {},
"stats": {"success": 0, "failed": 0, "total": 0},
}
cached_values = self.last_read_values[dataset_id]
cached_errors = self.last_read_errors.get(dataset_id, {})
timestamp = self.last_read_timestamps.get(dataset_id)
# Calculate statistics
total_vars = len(cached_values)
success_vars = sum(1 for value in cached_values.values() if value is not None)
failed_vars = total_vars - success_vars
# Determine overall success
if total_vars == 0:
return {
"success": True,
"message": "No variables defined in dataset",
"values": {},
"errors": {},
"stats": {"success": 0, "failed": 0, "total": 0},
"timestamp": timestamp.isoformat() if timestamp else None,
"source": "cache",
}
elif success_vars == 0:
return {
"success": False,
"error": "All variables failed to read in last streaming cycle",
"error_type": "all_failed",
"values": cached_values,
"errors": cached_errors,
"stats": {"success": 0, "failed": failed_vars, "total": total_vars},
"timestamp": timestamp.isoformat() if timestamp else None,
"source": "cache",
}
elif failed_vars > 0:
return {
"success": True,
"warning": f"Partial success in last streaming cycle: {success_vars}/{total_vars} variables read",
"values": cached_values,
"errors": cached_errors,
"stats": {
"success": success_vars,
"failed": failed_vars,
"total": total_vars,
},
"timestamp": timestamp.isoformat() if timestamp else None,
"source": "cache",
}
else:
return {
"success": True,
"message": f"All {success_vars} variables read successfully in last streaming cycle",
"values": cached_values,
"errors": {},
"stats": {"success": success_vars, "failed": 0, "total": total_vars},
"timestamp": timestamp.isoformat() if timestamp else None,
"source": "cache",
}
def clear_cached_values(self, dataset_id: str = None):
"""Clear cached values for a dataset or all datasets"""
if dataset_id:
# Clear specific dataset
self.last_read_values.pop(dataset_id, None)
self.last_read_timestamps.pop(dataset_id, None)
self.last_read_errors.pop(dataset_id, None)
else:
# Clear all
self.last_read_values.clear()
self.last_read_timestamps.clear()
self.last_read_errors.clear()
def has_cached_values(self, dataset_id: str) -> bool:
"""Check if dataset has cached values available"""
return dataset_id in self.last_read_values and bool(
self.last_read_values[dataset_id]
)
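# Illustrative helper (hypothetical) showing how a caller can pull the values
# last written to CSV via the two cache accessors above.
def latest_csv_snapshot(data_streamer, dataset_id):
    """Return (values, ISO timestamp) from the cache, or None if nothing is cached."""
    if not data_streamer.has_cached_values(dataset_id):
        return None
    cached = data_streamer.get_cached_dataset_values(dataset_id)
    return cached["values"], cached.get("timestamp")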
def dataset_streaming_loop(self, dataset_id: str):
"""Streaming loop for a specific dataset"""
dataset_info = self.config_manager.datasets[dataset_id]
@ -478,6 +598,9 @@ class DataStreamer:
# Stop streaming thread for this dataset
self.stop_dataset_streaming(dataset_id)
# Clear cached values for this dataset since it's no longer active
self.clear_cached_values(dataset_id)
dataset_info = self.config_manager.datasets[dataset_id]
self.event_logger.log_event(
"info",
@ -594,6 +717,118 @@ class DataStreamer:
"""Get set of currently active dataset IDs"""
return self.config_manager.active_datasets.copy()
def perform_csv_cleanup(self):
"""Perform cleanup of old CSV files based on configuration"""
if not self.config_manager.csv_config["rotation_enabled"]:
return
try:
base_directory = self.config_manager.get_csv_directory_path()
if not os.path.exists(base_directory):
return
max_size_mb = self.config_manager.csv_config["max_size_mb"]
max_days = self.config_manager.csv_config["max_days"]
max_hours = self.config_manager.csv_config["max_hours"]
# Get all CSV files with their info
csv_files = []
total_size = 0
for day_folder in os.listdir(base_directory):
day_path = os.path.join(base_directory, day_folder)
if os.path.isdir(day_path):
for file_name in os.listdir(day_path):
if file_name.endswith(".csv"):
file_path = os.path.join(day_path, file_name)
if os.path.isfile(file_path):
stat = os.stat(file_path)
csv_files.append(
{
"path": file_path,
"size": stat.st_size,
"modified": stat.st_mtime,
"day_folder": day_folder,
}
)
total_size += stat.st_size
# Sort by modification time (oldest first)
csv_files.sort(key=lambda x: x["modified"])
files_to_delete = []
now = datetime.now()
# Check time-based limits
if max_hours is not None:
cutoff_time = now.timestamp() - (max_hours * 3600)
files_to_delete.extend(
[f for f in csv_files if f["modified"] < cutoff_time]
)
elif max_days is not None:
cutoff_time = now.timestamp() - (max_days * 24 * 3600)
files_to_delete.extend(
[f for f in csv_files if f["modified"] < cutoff_time]
)
# Check size-based limits
if max_size_mb is not None:
max_size_bytes = max_size_mb * 1024 * 1024
while total_size > max_size_bytes and csv_files:
oldest_file = csv_files.pop(0)
if oldest_file not in files_to_delete:
files_to_delete.append(oldest_file)
total_size -= oldest_file["size"]
# Delete files
deleted_count = 0
deleted_size = 0
for file_info in files_to_delete:
try:
os.remove(file_info["path"])
deleted_count += 1
deleted_size += file_info["size"]
# Remove empty day folders
day_folder_path = os.path.dirname(file_info["path"])
if os.path.exists(day_folder_path) and not os.listdir(
day_folder_path
):
os.rmdir(day_folder_path)
except Exception as e:
if self.logger:
self.logger.warning(
f"Could not delete CSV file {file_info['path']}: {e}"
)
# Log cleanup results
if deleted_count > 0:
deleted_size_mb = deleted_size / (1024 * 1024)
self.event_logger.log_event(
"info",
"csv_cleanup",
f"CSV cleanup completed: {deleted_count} files deleted ({deleted_size_mb:.1f} MB freed)",
{
"deleted_files": deleted_count,
"deleted_size_mb": round(deleted_size_mb, 1),
"max_size_mb": max_size_mb,
"max_days": max_days,
"max_hours": max_hours,
},
)
# Mark cleanup as performed
self.config_manager.mark_cleanup_performed()
except Exception as e:
if self.logger:
self.logger.error(f"Error during CSV cleanup: {e}")
self.event_logger.log_event(
"error", "csv_cleanup_failed", f"CSV cleanup failed: {str(e)}"
)
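# Illustrative retention math mirroring perform_csv_cleanup() above: a file is
# an age-based deletion candidate when its mtime falls outside the configured
# window (max_hours takes precedence over max_days). `is_age_expired` is a
# hypothetical helper, not used by the streamer itself.
from datetime import datetime

def is_age_expired(mtime: float, max_hours=None, max_days=None) -> bool:
    now_ts = datetime.now().timestamp()
    if max_hours is not None:
        return mtime < now_ts - max_hours * 3600        # e.g. 12 h -> 43,200 s window
    if max_days is not None:
        return mtime < now_ts - max_days * 24 * 3600    # e.g. 30 d -> 2,592,000 s window
    return False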
def get_streaming_stats(self) -> Dict[str, Any]:
"""Get streaming statistics"""
return {

main.py (582 changed lines)

@ -6,6 +6,8 @@ from flask import (
redirect,
url_for,
send_from_directory,
Response,
stream_template,
stream_with_context,  # keeps the request context available inside the SSE generators below
)
import snap7
import snap7.util
@ -121,6 +123,171 @@ def update_udp_config():
return jsonify({"success": False, "message": str(e)}), 400
@app.route("/api/csv/config", methods=["GET"])
def get_csv_config():
"""Get CSV recording configuration"""
error_response = check_streamer_initialized()
if error_response:
return error_response
try:
csv_config = streamer.config_manager.csv_config.copy()
# Add current directory information
current_dir = streamer.config_manager.get_csv_directory_path()
csv_config["current_directory"] = os.path.abspath(current_dir)
csv_config["directory_exists"] = os.path.exists(current_dir)
# Add disk space info
disk_info = streamer.get_disk_space_info()
if disk_info:
csv_config["disk_space"] = disk_info
return jsonify({"success": True, "config": csv_config})
except Exception as e:
return jsonify({"success": False, "message": str(e)}), 500
@app.route("/api/csv/config", methods=["POST"])
def update_csv_config():
"""Update CSV recording configuration"""
error_response = check_streamer_initialized()
if error_response:
return error_response
try:
data = request.get_json()
# Extract valid configuration parameters
config_updates = {}
valid_params = {
"records_directory",
"rotation_enabled",
"max_size_mb",
"max_days",
"max_hours",
"cleanup_interval_hours",
}
for param in valid_params:
if param in data:
config_updates[param] = data[param]
if not config_updates:
return (
jsonify(
{
"success": False,
"message": "No valid configuration parameters provided",
}
),
400,
)
# Update configuration
result = streamer.config_manager.update_csv_config(**config_updates)
return jsonify(
{
"success": True,
"message": "CSV configuration updated successfully",
"old_config": result["old_config"],
"new_config": result["new_config"],
}
)
except ValueError as e:
return jsonify({"success": False, "message": str(e)}), 400
except Exception as e:
return jsonify({"success": False, "message": str(e)}), 500
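# Illustrative client call for the POST endpoint above, using only the standard
# library. The host/port (Flask default 127.0.0.1:5000) are assumptions; adjust
# them to the actual deployment.
import json
import urllib.request

payload = json.dumps({"max_days": 15, "max_size_mb": 500}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:5000/api/csv/config",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["new_config"])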
@app.route("/api/csv/cleanup", methods=["POST"])
def trigger_csv_cleanup():
"""Manually trigger CSV cleanup"""
error_response = check_streamer_initialized()
if error_response:
return error_response
try:
# Perform cleanup
streamer.streamer.perform_csv_cleanup()
return jsonify(
{"success": True, "message": "CSV cleanup completed successfully"}
)
except Exception as e:
return jsonify({"success": False, "message": str(e)}), 500
@app.route("/api/csv/directory/info", methods=["GET"])
def get_csv_directory_info():
"""Get information about CSV directory and files"""
error_response = check_streamer_initialized()
if error_response:
return error_response
try:
base_dir = streamer.config_manager.get_csv_directory_path()
info = {
"base_directory": os.path.abspath(base_dir),
"directory_exists": os.path.exists(base_dir),
"total_files": 0,
"total_size_mb": 0,
"oldest_file": None,
"newest_file": None,
"day_folders": [],
}
if os.path.exists(base_dir):
total_size = 0
oldest_time = None
newest_time = None
for day_folder in os.listdir(base_dir):
day_path = os.path.join(base_dir, day_folder)
if os.path.isdir(day_path):
day_info = {"name": day_folder, "files": 0, "size_mb": 0}
for file_name in os.listdir(day_path):
if file_name.endswith(".csv"):
file_path = os.path.join(day_path, file_name)
if os.path.isfile(file_path):
stat = os.stat(file_path)
file_size = stat.st_size
file_time = stat.st_mtime
info["total_files"] += 1
day_info["files"] += 1
total_size += file_size
day_info["size_mb"] += file_size / (1024 * 1024)
if oldest_time is None or file_time < oldest_time:
oldest_time = file_time
info["oldest_file"] = datetime.fromtimestamp(
file_time
).isoformat()
if newest_time is None or file_time > newest_time:
newest_time = file_time
info["newest_file"] = datetime.fromtimestamp(
file_time
).isoformat()
day_info["size_mb"] = round(day_info["size_mb"], 2)
info["day_folders"].append(day_info)
info["total_size_mb"] = round(total_size / (1024 * 1024), 2)
info["day_folders"].sort(key=lambda x: x["name"], reverse=True)
return jsonify({"success": True, "info": info})
except Exception as e:
return jsonify({"success": False, "message": str(e)}), 500
@app.route("/api/plc/connect", methods=["POST"])
def connect_plc():
"""Connect to PLC"""
@ -383,6 +550,213 @@ def get_streaming_variables():
return jsonify({"success": True, "streaming_variables": streaming_vars})
@app.route("/api/datasets/<dataset_id>/variables/values", methods=["GET"])
def get_dataset_variable_values(dataset_id):
"""Get current values of all variables in a dataset"""
error_response = check_streamer_initialized()
if error_response:
return error_response
try:
# Check if dataset exists
if dataset_id not in streamer.datasets:
return (
jsonify(
{"success": False, "message": f"Dataset '{dataset_id}' not found"}
),
404,
)
# Check if PLC is connected
if not streamer.plc_client.is_connected():
return (
jsonify(
{
"success": False,
"message": "PLC not connected. Please connect to PLC first.",
"values": {},
}
),
400,
)
# Get dataset variables
dataset_variables = streamer.get_dataset_variables(dataset_id)
if not dataset_variables:
return jsonify(
{
"success": True,
"message": "No variables defined in this dataset",
"values": {},
}
)
# First, try to get cached values (values used for CSV generation)
if streamer.has_cached_values(dataset_id):
read_result = streamer.get_cached_dataset_values(dataset_id)
# Convert timestamp from ISO format to readable format for consistency
if read_result.get("timestamp"):
try:
cached_timestamp = datetime.fromisoformat(read_result["timestamp"])
read_result["timestamp"] = cached_timestamp.strftime(
"%Y-%m-%d %H:%M:%S"
)
except:
pass # Keep original timestamp if conversion fails
else:
# Fallback: Read directly from PLC if no cached values available
read_result = streamer.plc_client.read_multiple_variables_with_diagnostics(
dataset_variables
)
read_result["source"] = "plc_direct"
# Extract values and handle diagnostics
if not read_result.get("success", False):
# Complete failure case
error_msg = read_result.get("error", "Unknown error reading variables")
error_type = read_result.get("error_type", "unknown")
# Log detailed error information
if streamer.logger:
streamer.logger.error(
f"Failed to read any variables from dataset '{dataset_id}': {error_msg}"
)
if read_result.get("errors"):
for var_name, var_error in read_result["errors"].items():
streamer.logger.error(f" Variable '{var_name}': {var_error}")
# Determine source for error case
error_source = read_result.get("source", "unknown")
return (
jsonify(
{
"success": False,
"message": error_msg,
"error_type": error_type,
"values": {},
"detailed_errors": read_result.get("errors", {}),
"stats": read_result.get("stats", {}),
"timestamp": read_result.get(
"timestamp", datetime.now().strftime("%Y-%m-%d %H:%M:%S")
),
"source": error_source,
"is_cached": error_source == "cache",
}
),
500,
)
# Success or partial success case
raw_values = read_result.get("values", {})
variable_errors = read_result.get("errors", {})
stats = read_result.get("stats", {})
# Format values for display
formatted_values = {}
error_details = {}
for var_name, value in raw_values.items():
if value is not None:
var_config = dataset_variables[var_name]
var_type = var_config.get("type", "unknown")
# Format value based on type
try:
if var_type == "real":
formatted_values[var_name] = (
f"{value:.3f}"
if isinstance(value, (int, float))
else str(value)
)
elif var_type == "bool":
formatted_values[var_name] = "TRUE" if value else "FALSE"
elif var_type in [
"int",
"uint",
"dint",
"udint",
"word",
"byte",
"sint",
"usint",
]:
formatted_values[var_name] = (
str(int(value))
if isinstance(value, (int, float))
else str(value)
)
else:
formatted_values[var_name] = str(value)
except Exception as format_error:
formatted_values[var_name] = "FORMAT_ERROR"
error_details[var_name] = f"Format error: {str(format_error)}"
else:
# Variable had an error - get the specific error message
specific_error = variable_errors.get(var_name, "Unknown error")
formatted_values[var_name] = "ERROR"
error_details[var_name] = specific_error
# Prepare response message
total_vars = stats.get("total", len(dataset_variables))
success_vars = stats.get("success", 0)
failed_vars = stats.get("failed", 0)
# Determine data source for message
data_source = read_result.get("source", "unknown")
source_text = ""
if data_source == "cache":
source_text = " (from last streaming cycle)"
elif data_source == "plc_direct":
source_text = " (direct PLC read)"
if failed_vars == 0:
message = f"Successfully read all {success_vars} variables{source_text}"
response_success = True
else:
message = f"Partial success: {success_vars}/{total_vars} variables read successfully, {failed_vars} failed{source_text}"
response_success = True # Still success if we got some values
# Log warnings for partial failures
if streamer.logger:
streamer.logger.warning(
f"Partial failure reading variables from dataset '{dataset_id}': {message}"
)
for var_name, var_error in error_details.items():
if formatted_values.get(var_name) == "ERROR":
streamer.logger.warning(f" Variable '{var_name}': {var_error}")
return jsonify(
{
"success": response_success,
"message": message,
"values": formatted_values,
"detailed_errors": error_details,
"stats": stats,
"timestamp": read_result.get(
"timestamp", datetime.now().strftime("%Y-%m-%d %H:%M:%S")
),
"warning": read_result.get("warning"),
"source": data_source,
"is_cached": data_source == "cache",
}
)
except Exception as e:
return (
jsonify(
{
"success": False,
"message": f"Error reading variable values: {str(e)}",
"values": {},
}
),
500,
)
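# Illustrative client call for the endpoint above, using only the standard
# library. Host/port are assumptions; "dar" matches the current_dataset_id that
# appears in the dataset state file later in this commit.
import json
import urllib.request

url = "http://127.0.0.1:5000/api/datasets/dar/variables/values"
with urllib.request.urlopen(url) as resp:
    body = json.loads(resp.read())
print(body["message"], "| cached:", body.get("is_cached"), "| source:", body.get("source"))
for name, value in body.get("values", {}).items():
    print(f"  {name} = {value}")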
# Dataset Management API Endpoints
@ -754,6 +1128,214 @@ def get_events():
return jsonify({"success": False, "error": str(e)}), 500
@app.route("/api/stream/variables", methods=["GET"])
def stream_variables():
"""Stream variable values in real-time using Server-Sent Events"""
error_response = check_streamer_initialized()
if error_response:
return error_response
def generate():
"""Generate SSE data stream"""
dataset_id = request.args.get("dataset_id")
interval = float(request.args.get("interval", 1.0)) # Default 1 second
if not dataset_id:
yield f"data: {json.dumps({'error': 'Dataset ID required'})}\n\n"
return
if dataset_id not in streamer.datasets:
yield f"data: {json.dumps({'error': f'Dataset {dataset_id} not found'})}\n\n"
return
# Send initial connection message
yield f"data: {json.dumps({'type': 'connected', 'message': 'SSE connection established'})}\n\n"
last_values = {}
while True:
try:
# Check if client is still connected
if request.headers.get("accept") != "text/event-stream":
break
# Get current variable values
if streamer.plc_client.is_connected():
dataset_variables = streamer.get_dataset_variables(dataset_id)
if dataset_variables:
# Try to get cached values first
if streamer.has_cached_values(dataset_id):
read_result = streamer.get_cached_dataset_values(dataset_id)
else:
# Fallback to direct PLC read
read_result = streamer.plc_client.read_multiple_variables_with_diagnostics(
dataset_variables
)
read_result["source"] = "plc_direct"
if read_result.get("success", False):
values = read_result.get("values", {})
timestamp = read_result.get(
"timestamp", datetime.now().isoformat()
)
# Format values for display
formatted_values = {}
for var_name, value in values.items():
if value is not None:
var_config = dataset_variables[var_name]
var_type = var_config.get("type", "unknown")
try:
if var_type == "real":
formatted_values[var_name] = (
f"{value:.3f}"
if isinstance(value, (int, float))
else str(value)
)
elif var_type == "bool":
formatted_values[var_name] = (
"TRUE" if value else "FALSE"
)
elif var_type in [
"int",
"uint",
"dint",
"udint",
"word",
"byte",
"sint",
"usint",
]:
formatted_values[var_name] = (
str(int(value))
if isinstance(value, (int, float))
else str(value)
)
else:
formatted_values[var_name] = str(value)
except:
formatted_values[var_name] = "FORMAT_ERROR"
else:
formatted_values[var_name] = "ERROR"
# Only send if values changed
if formatted_values != last_values:
data = {
"type": "values",
"values": formatted_values,
"timestamp": timestamp,
"source": read_result.get("source", "unknown"),
"stats": read_result.get("stats", {}),
}
yield f"data: {json.dumps(data)}\n\n"
last_values = formatted_values.copy()
else:
# Send error data
error_data = {
"type": "error",
"message": read_result.get("error", "Unknown error"),
"timestamp": datetime.now().isoformat(),
}
yield f"data: {json.dumps(error_data)}\n\n"
else:
# No variables in dataset
data = {
"type": "no_variables",
"message": "No variables defined in this dataset",
"timestamp": datetime.now().isoformat(),
}
yield f"data: {json.dumps(data)}\n\n"
else:
# PLC not connected
data = {
"type": "plc_disconnected",
"message": "PLC not connected",
"timestamp": datetime.now().isoformat(),
}
yield f"data: {json.dumps(data)}\n\n"
time.sleep(interval)
except Exception as e:
error_data = {
"type": "error",
"message": f"Stream error: {str(e)}",
"timestamp": datetime.now().isoformat(),
}
yield f"data: {json.dumps(error_data)}\n\n"
time.sleep(interval)
return Response(
stream_with_context(generate()),  # wrap so request.args/headers stay accessible while streaming
mimetype="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Headers": "Cache-Control",
},
)
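# Illustrative SSE consumer for the /api/stream/variables endpoint above, using
# only the standard library. Host/port and the dataset id are assumptions; the
# Accept header matches the check performed inside the generator.
import json
import urllib.request

url = "http://127.0.0.1:5000/api/stream/variables?dataset_id=dar&interval=1.0"
req = urllib.request.Request(url, headers={"Accept": "text/event-stream"})
with urllib.request.urlopen(req) as stream:
    for raw_line in stream:
        line = raw_line.decode("utf-8").strip()
        if not line.startswith("data: "):
            continue  # skip the blank separator lines between events
        event = json.loads(line[len("data: "):])
        if event.get("type") == "values":
            print(event["timestamp"], event["values"])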
@app.route("/api/stream/status", methods=["GET"])
def stream_status():
"""Stream application status in real-time using Server-Sent Events"""
error_response = check_streamer_initialized()
if error_response:
return error_response
def generate():
"""Generate SSE status stream"""
interval = float(request.args.get("interval", 2.0)) # Default 2 seconds
last_status = None
# Send initial connection message
yield f"data: {json.dumps({'type': 'connected', 'message': 'Status stream connected'})}\n\n"
while True:
try:
# Check if client is still connected
if request.headers.get("accept") != "text/event-stream":
break
# Get current status
current_status = streamer.get_status()
# Only send if status changed
if current_status != last_status:
data = {
"type": "status",
"status": current_status,
"timestamp": datetime.now().isoformat(),
}
yield f"data: {json.dumps(data)}\n\n"
last_status = current_status
time.sleep(interval)
except Exception as e:
error_data = {
"type": "error",
"message": f"Status stream error: {str(e)}",
"timestamp": datetime.now().isoformat(),
}
yield f"data: {json.dumps(error_data)}\n\n"
time.sleep(interval)
return Response(
stream_with_context(generate()),  # wrap so request.args/headers stay accessible while streaming
mimetype="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Headers": "Cache-Control",
},
)
def graceful_shutdown():
"""Perform graceful shutdown"""
print("\n⏹️ Performing graceful shutdown...")

@ -1,6 +1,6 @@
{
"plc_config": {
"ip": "10.1.33.11",
"ip": "10.1.33.249",
"rack": 0,
"slot": 2
},
@ -8,5 +8,14 @@
"host": "127.0.0.1",
"port": 9870
},
"sampling_interval": 0.1
"sampling_interval": 0.1,
"csv_config": {
"records_directory": "records",
"rotation_enabled": true,
"max_size_mb": 1000,
"max_days": 30,
"max_hours": null,
"cleanup_interval_hours": 24,
"last_cleanup": "2025-07-19T23:30:11.005072"
}
}

@ -70,5 +70,5 @@
],
"current_dataset_id": "dar",
"version": "1.0",
"last_update": "2025-07-18T16:14:57.607742"
"last_update": "2025-07-19T23:43:55.376313"
}

@ -7,5 +7,5 @@
]
},
"auto_recovery_enabled": true,
"last_update": "2025-07-18T16:14:48.202036"
"last_update": "2025-07-19T23:43:55.380321"
}

File diff suppressed because it is too large