Compare commits

No commits in common. "5caa74fa275dd7385a0669a70fb0cb53df6f90c5" and "d99d4394556404629bb43cceb9b1007da89989f6" have entirely different histories.

5caa74fa27 ... d99d439455

@ -693,40 +693,3 @@ ChartjsPlot render → Chart.js integration → streaming setup
- Cause: the `Form` used `formData={data[key]}` and `onChange` was not controlled in edit mode, so any re-render restored the original value.

- Solution: `FormTable.jsx` now uses a local `editingFormData` state while `editingKey === key`. It is initialized when Edit is pressed, `onChange` updates `editingFormData`, and the `Form`'s `formData` is fed from that state until the user saves or cancels.

- Impact: while editing an item, changes across fields are kept correctly until Save is pressed. A minimal sketch of the pattern follows.
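A minimal sketch of this controlled-editing pattern, assuming an RJSF `Form`; the component shape and prop names other than `editingFormData` are illustrative, not copied from `FormTable.jsx`:

```jsx
import { useState } from 'react'
import Form from '@rjsf/core'
import validator from '@rjsf/validator-ajv8'

// Sketch: while a row is being edited, the form is driven by local state,
// so parent re-renders no longer overwrite in-progress changes.
function FormTable({ data, schema }) {
  const [editingKey, setEditingKey] = useState(null)
  const [editingFormData, setEditingFormData] = useState(null)

  const startEdit = (key) => {
    setEditingKey(key)
    setEditingFormData(data[key]) // snapshot taken once, when Edit is pressed
  }

  return Object.keys(data).map((key) => (
    <div key={key}>
      <button onClick={() => startEdit(key)}>Edit</button>
      <Form
        schema={schema}
        validator={validator}
        formData={editingKey === key ? editingFormData : data[key]}
        onChange={(e) => editingKey === key && setEditingFormData(e.formData)}
        readonly={editingKey !== key}
      />
    </div>
  ))
}
```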
## 2025-08-14

- Request (summary): implement loading of historical data from CSV files when plots start, so they can begin immediately with data covering the defined span. Fix Chart.js errors related to `addEventListener` being called on null elements.

- Decisions and changes:
  - Fixed a critical error in `frontend/src/components/ChartjsPlot.jsx` where `addEventListener` was called on null DOM elements during Chart.js initialization/cleanup.
  - Added exhaustive DOM validation: canvas mounted, 2D context available, and element present in the DOM tree before creating charts.
  - Improved the cleanup function with try-catch to avoid errors during component unmount, especially under React StrictMode.
  - Re-enabled the historical data loading functionality that had been temporarily disabled.
  - The `/api/plots/historical` API already existed and worked correctly, using pandas to read CSV files organized by date (DD-MM-YYYY).

- Relevant technical knowledge:
  - CSV files are stored in `records/{date}/` with the name format `{prefix}_{hour}.csv` and contain a timestamp plus the dataset variables.
  - Historical loading looks for data within the specified window (`time_window`) and pre-populates it into the chart datasets.
  - Chart.js with the streaming plugin requires strict DOM validation to avoid event-listener errors on null elements.
  - The backend keeps backward compatibility for variable formats (arrays, objects keyed by name, and the `variable_name` format).

- Historical loading architecture (a frontend sketch follows this list):
  - Frontend: `loadHistoricalData()` → POST `/api/plots/historical` with variables and `time_window`
  - Backend: finds CSV files for the relevant dates, filters by time range, and extracts data for the matching variables
  - Chart.js: pre-populates datasets with chronologically ordered historical data before real-time streaming starts
  - Benefit: plots show immediate context without waiting for new data from the PLC
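A minimal sketch of the frontend side of that flow, assuming the endpoint returns `{ data: [{ timestamp, variable, value }, ...] }` sorted by time and that datasets are labeled with the variable name; it is illustrative, not the exact code in `ChartjsPlot.jsx`:

```js
// Sketch: fetch historical points and pre-populate Chart.js datasets
// before the streaming plugin starts appending live samples.
async function loadHistoricalData(chart, variables, timeWindowSeconds) {
  const res = await fetch('/api/plots/historical', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ variables, time_window: timeWindowSeconds }),
  })
  if (!res.ok) throw new Error(`Historical request failed: ${res.status}`)
  const { data } = await res.json()

  // Push each point, oldest first, into the dataset that matches its variable
  // so the plot starts with the configured time window already filled.
  for (const point of data) {
    const dataset = chart.data.datasets.find((d) => d.label === point.variable)
    if (dataset) {
      dataset.data.push({ x: new Date(point.timestamp), y: point.value })
    }
  }
  chart.update('none') // redraw without animation
}
```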
- Additional error fixes:
  - **HTTP 500 error**: added exhaustive debugging to the `/api/plots/historical` endpoint, with detailed logs to diagnose problems with pandas, CSV files, and paths.
  - **"Canvas is already in use" error**: improved the Chart.js validation and cleanup logic for React StrictMode, which runs `useEffect` twice in development.
  - **"Cannot set properties of null" error**: strengthened DOM validation before creating charts, verifying that the canvas is mounted, a 2D context is available, and the element is in the DOM tree.
  - **Increased delay**: React StrictMode needs 50 ms instead of 10 ms to complete cleanup between renders.
  - **Final validation**: a comprehensive check before creating a chart prevents canvas conflicts with the Chart.js registry.

- Safety states implemented (a defensive-creation sketch follows this list):
  - Verification of Chart.js dependencies at component start
  - Validation of the canvas DOM node before obtaining the 2D context
  - Aggressive cleanup with try-catch to avoid errors during unmount
  - Detection of existing charts in the Chart.js registry before creating new ones
  - Error handling with loading/error states that report problems to the user
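A minimal sketch of that defensive creation/cleanup pattern, assuming Chart.js is exposed as `window.Chart` (as `chartSetup.js` does); the exact checks in `ChartjsPlot.jsx` may differ:

```js
// Sketch: create a chart only when the canvas is actually usable, and destroy
// any instance already attached to it (StrictMode mounts effects twice in
// development, so a stale chart may still own the canvas).
function createChartSafely(canvas, config) {
  if (!window.Chart) throw new Error('Chart.js is not loaded')
  if (!canvas || !canvas.isConnected) return null // not mounted / not in DOM tree
  const ctx = canvas.getContext('2d')
  if (!ctx) return null // no 2D context available

  const existing = window.Chart.getChart(canvas) // look up the Chart.js registry
  if (existing) existing.destroy() // avoids "Canvas is already in use"

  return new window.Chart(ctx, config)
}

function destroyChartSafely(chart) {
  try {
    chart?.destroy()
  } catch (e) {
    // Ignore errors raised while React unmounts the component.
    console.warn('Chart cleanup failed:', e)
  }
}
```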
OFFLINE_USAGE.md (107 lines)

@ -1,107 +0,0 @@
# Offline Usage Configuration

## Overview

The application has been configured to work completely offline, without any CDN dependencies.

## Changes Made

### 1. Chart.js Libraries Migration

**Before (CDN Dependencies):**
- Chart.js loaded from `https://cdn.jsdelivr.net/npm/chart.js@3.9.1`
- Luxon from `https://cdn.jsdelivr.net/npm/luxon@2`
- Chart.js adapter from `https://cdn.jsdelivr.net/npm/chartjs-adapter-luxon@1.3.1`
- Zoom plugin from `https://unpkg.com/chartjs-plugin-zoom@1.2.1`
- Streaming plugin from `https://cdn.jsdelivr.net/npm/chartjs-plugin-streaming@2.0.0`

**After (NPM Dependencies):**
All Chart.js libraries are now installed as npm packages and bundled with the application:

```json
"chart.js": "^3.9.1",
"chartjs-adapter-luxon": "^1.3.1",
"chartjs-plugin-streaming": "^2.0.0",
"chartjs-plugin-zoom": "^1.2.1",
"luxon": "^2.5.2"
```

### 2. Chart.js Setup Module

Created `frontend/src/utils/chartSetup.js`, which:
- Imports all Chart.js components as ES modules
- Registers all required plugins (zoom, streaming, time scales)
- Makes Chart.js available globally for existing components
- Provides console confirmation of successful setup

### 3. Application Entry Point

Modified `frontend/src/main.jsx` to import the Chart.js setup before rendering the application.

### 4. Updated HTML Template

Removed all CDN script tags from `frontend/index.html`.

## Verification

### Build Verification

The application builds successfully without any external dependencies:

```bash
cd frontend
npm run build
```

### Development Server

The development server runs without an internet connection:

```bash
cd frontend
npm run dev
```

### Runtime Verification

- No network requests to external CDNs
- All Chart.js functionality preserved (zooming, streaming, real-time plots)
- Completely self-contained in the bundled JavaScript

## Backend Offline Compliance

The Python backend already uses only local dependencies:
- Flask for the web server
- python-snap7 for PLC communication
- Local file-based configuration
- No external API calls or services

## Deployment for Offline Use

### Frontend Production Build

```bash
cd frontend
npm run build
```

The `dist/` folder contains all necessary files with no external dependencies.

### Complete Offline Package

The entire application (backend + frontend) can be deployed on systems without internet access:

1. **Python Requirements**: Install from `requirements.txt`
2. **Frontend**: Use the built files from the `dist/` folder
3. **PLC Communication**: Requires `snap7.dll` in the system PATH
4. **Configuration**: All JSON-based, stored locally

## Chart.js Feature Compatibility

All existing Chart.js features remain functional:
- ✅ Real-time streaming plots
- ✅ Zoom and pan functionality
- ✅ Time-based X-axis with the Luxon adapter
- ✅ Multiple dataset support
- ✅ Dynamic color assignment
- ✅ Plot session management
- ✅ CSV data export integration

## Technical Notes

### Global vs ES Module Access

The setup maintains backward compatibility by making Chart.js available both ways (a short usage sketch follows):
- **Global**: `window.Chart` (for existing components)
- **ES Module**: `import ChartJS from './utils/chartSetup.js'` (for new components)
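As a small illustrative sketch (not code from the repository; `myPlot` is a placeholder canvas id):

```js
import ChartJS from './utils/chartSetup.js'

// ES-module path, preferred for new components.
const chart = new ChartJS(document.getElementById('myPlot'), {
  type: 'line',
  data: { labels: [], datasets: [] },
})

// Global path: existing components keep using window.Chart,
// which chartSetup.js points at the same constructor.
console.log(window.Chart === ChartJS) // true
```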
### Bundle Size Impact

The Chart.js libraries add approximately 400 KB to the bundle (gzipped), which is acceptable for offline industrial applications.

### Browser Compatibility

All dependencies support modern browsers without requiring polyfills for the target deployment environment.
File diff suppressed because it is too large.
@ -32,13 +32,6 @@
      "streaming": true,
      "symbol": "AUX Blink_1.0S",
      "type": "real"
    },
    {
      "configType": "symbol",
      "area": "db",
      "streaming": false,
      "symbol": "AUX Blink_1.6S",
      "type": "real"
    }
  ]
}
@ -5,27 +5,14 @@
    "line_tension": 0,
    "name": "UR29",
    "point_hover_radius": 4,
    "point_radius": 0,
    "stacked": false,
    "point_radius": 4,
    "stepped": true,
    "time_window": 10,
    "time_window": 20,
    "trigger_enabled": false,
    "trigger_on_true": true,
    "trigger_variable": null,
    "y_max": null,
    "y_min": null
  },
  {
    "id": "Clock",
    "line_tension": 0,
    "name": "Clock",
    "point_hover_radius": 4,
    "point_radius": 1,
    "stacked": false,
    "stepped": true,
    "time_window": 10,
    "trigger_enabled": false,
    "trigger_on_true": true
  }
]
}
@ -23,34 +23,8 @@
        "variable_name": "AUX Blink_1.0S",
        "color": "#3498db",
        "line_width": 2,
        "y_axis": "right",
        "y_axis": "left",
        "enabled": true
      },
      {
        "variable_name": "AUX Blink_1.6S",
        "color": "#630de3",
        "line_width": 2,
        "y_axis": "right",
        "enabled": true
      }
    ]
  },
  {
    "plot_id": "Clock",
    "variables": [
      {
        "color": "#3498db",
        "enabled": true,
        "line_width": 2,
        "variable_name": "AUX Blink_1.0S",
        "y_axis": "left"
      },
      {
        "color": "#87db33",
        "enabled": true,
        "line_width": 2,
        "variable_name": "AUX Blink_1.6S",
        "y_axis": "left"
      }
    ]
  }
@ -84,12 +84,6 @@
      "minimum": 0,
      "maximum": 15,
      "default": 4
    },
    "stacked": {
      "type": "boolean",
      "title": "Stacked Linear",
      "description": "Enable stacked Y-axes for multi-axis visualization",
      "default": false
    }
  },
  "required": ["id", "name", "time_window"]
@ -18,7 +18,6 @@
    "trigger_on_true",
    "line_tension",
    "stepped",
    "stacked",
    "point_radius",
    "point_hover_radius"
  ],
@ -64,25 +63,19 @@
    [
      {
        "name": "line_tension",
        "width": 4
        "width": 3
      },
      {
        "name": "stepped",
        "width": 4
        "width": 3
      },
      {
        "name": "stacked",
        "width": 4
      }
    ],
    [
      {
        "name": "point_radius",
        "width": 6
        "width": 3
      },
      {
        "name": "point_hover_radius",
        "width": 6
        "width": 3
      }
    ]
  ],
@ -133,10 +126,6 @@
    "ui:widget": "checkbox",
    "ui:help": "📊 Enable stepped line style instead of curves"
  },
  "stacked": {
    "ui:widget": "checkbox",
    "ui:help": "📚 Enable stacked Y-axes for multi-axis visualization"
  },
  "point_radius": {
    "ui:widget": "updown",
    "ui:help": "🔴 Size of data points (0-10)"
@ -200,150 +200,6 @@ class PLCDataStreamer:
|
|||
f"Disconnected from PLC {self.config_manager.plc_config['ip']} (stopped recording and streaming)",
|
||||
)
|
||||
|
||||
# Configuration Management Methods
|
||||
def reload_dataset_configuration(self):
|
||||
"""Reload dataset configuration from JSON files and validate CSV headers"""
|
||||
try:
|
||||
self.config_manager.load_datasets()
|
||||
self.config_manager.sync_streaming_variables()
|
||||
|
||||
# 🔍 NEW: Validate CSV headers for active datasets after configuration reload
|
||||
self._validate_csv_headers_after_config_change()
|
||||
|
||||
self.event_logger.log_event(
|
||||
"info",
|
||||
"config_reload",
|
||||
"Dataset configuration reloaded from files with CSV header validation",
|
||||
{
|
||||
"datasets_count": len(self.config_manager.datasets),
|
||||
"active_datasets_count": len(self.config_manager.active_datasets),
|
||||
"csv_recording_active": self.data_streamer.is_csv_recording(),
|
||||
},
|
||||
)
|
||||
if self.logger:
|
||||
self.logger.info(
|
||||
"Dataset configuration reloaded successfully with CSV header validation"
|
||||
)
|
||||
except Exception as e:
|
||||
self.event_logger.log_event(
|
||||
"error",
|
||||
"config_reload_failed",
|
||||
f"Failed to reload dataset configuration: {str(e)}",
|
||||
{"error": str(e)},
|
||||
)
|
||||
if self.logger:
|
||||
self.logger.error(f"Failed to reload dataset configuration: {e}")
|
||||
raise
|
||||
|
||||
def _validate_csv_headers_after_config_change(self):
|
||||
"""Validate CSV headers for all active datasets after configuration changes"""
|
||||
if not self.data_streamer.is_csv_recording():
|
||||
if self.logger:
|
||||
self.logger.debug(
|
||||
"CSV recording not active, skipping header validation"
|
||||
)
|
||||
return
|
||||
|
||||
validated_datasets = []
|
||||
header_mismatches = []
|
||||
|
||||
for dataset_id in self.config_manager.active_datasets:
|
||||
try:
|
||||
# Check if this dataset has an active CSV file
|
||||
if dataset_id not in self.data_streamer.dataset_csv_files:
|
||||
continue
|
||||
|
||||
# Get current CSV file path
|
||||
csv_path = self.data_streamer.get_dataset_csv_file_path(dataset_id)
|
||||
|
||||
if not os.path.exists(csv_path):
|
||||
continue
|
||||
|
||||
# Get expected headers based on current configuration
|
||||
dataset_variables = self.config_manager.get_dataset_variables(
|
||||
dataset_id
|
||||
)
|
||||
expected_headers = ["timestamp"] + list(dataset_variables.keys())
|
||||
|
||||
# Read existing headers from the file
|
||||
existing_headers = self.data_streamer.read_csv_headers(csv_path)
|
||||
|
||||
# Compare headers
|
||||
if existing_headers and not self.data_streamer.compare_headers(
|
||||
existing_headers, expected_headers
|
||||
):
|
||||
# Header mismatch detected - close current file and rename it
|
||||
if dataset_id in self.data_streamer.dataset_csv_files:
|
||||
self.data_streamer.dataset_csv_files[dataset_id].close()
|
||||
del self.data_streamer.dataset_csv_files[dataset_id]
|
||||
del self.data_streamer.dataset_csv_writers[dataset_id]
|
||||
|
||||
# Rename the file with timestamp
|
||||
prefix = self.config_manager.datasets[dataset_id]["prefix"]
|
||||
renamed_path = self.data_streamer.rename_csv_file_with_timestamp(
|
||||
csv_path, prefix
|
||||
)
|
||||
|
||||
header_mismatches.append(
|
||||
{
|
||||
"dataset_id": dataset_id,
|
||||
"dataset_name": self.config_manager.datasets[dataset_id][
|
||||
"name"
|
||||
],
|
||||
"original_file": csv_path,
|
||||
"renamed_file": renamed_path,
|
||||
"expected_headers": expected_headers,
|
||||
"existing_headers": existing_headers,
|
||||
}
|
||||
)
|
||||
|
||||
# Create new file with correct headers (will be done on next write)
|
||||
# The setup_dataset_csv_file method will handle creating the new file
|
||||
|
||||
if self.logger:
|
||||
self.logger.info(
|
||||
f"CSV header mismatch detected for dataset '{self.config_manager.datasets[dataset_id]['name']}' "
|
||||
f"after configuration reload. File renamed: {os.path.basename(csv_path)} -> {os.path.basename(renamed_path)}"
|
||||
)
|
||||
|
||||
validated_datasets.append(
|
||||
{
|
||||
"dataset_id": dataset_id,
|
||||
"dataset_name": self.config_manager.datasets[dataset_id][
|
||||
"name"
|
||||
],
|
||||
"headers_match": len(header_mismatches) == 0
|
||||
or dataset_id
|
||||
not in [h["dataset_id"] for h in header_mismatches],
|
||||
"expected_headers": expected_headers,
|
||||
"existing_headers": existing_headers,
|
||||
}
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
if self.logger:
|
||||
self.logger.warning(
|
||||
f"Error validating CSV headers for dataset {dataset_id}: {e}"
|
||||
)
|
||||
|
||||
# Log summary of validation results
|
||||
if header_mismatches:
|
||||
self.event_logger.log_event(
|
||||
"warning",
|
||||
"csv_headers_mismatch_after_config_reload",
|
||||
f"CSV header mismatches detected and resolved for {len(header_mismatches)} datasets after configuration reload",
|
||||
{
|
||||
"mismatched_datasets": len(header_mismatches),
|
||||
"total_validated": len(validated_datasets),
|
||||
"details": header_mismatches,
|
||||
},
|
||||
)
|
||||
else:
|
||||
if validated_datasets and self.logger:
|
||||
self.logger.info(
|
||||
f"CSV headers validated for {len(validated_datasets)} active datasets - all headers match"
|
||||
)
|
||||
|
||||
# Configuration Methods
|
||||
def update_plc_config(self, ip: str, rack: int, slot: int):
|
||||
"""Update PLC configuration"""
|
||||
|
@ -742,11 +598,6 @@ class PLCDataStreamer:
|
|||
"""Get streaming status (backward compatibility)"""
|
||||
return self.data_streamer.is_streaming()
|
||||
|
||||
# Methods for backward compatibility
|
||||
def load_datasets(self):
|
||||
"""Load datasets (backward compatibility - delegates to reload_dataset_configuration)"""
|
||||
self.reload_dataset_configuration()
|
||||
|
||||
# DEPRECATED: save_datasets() method removed
|
||||
# Data is now saved directly from frontend via RJSF and API endpoints
|
||||
# Use reload_dataset_configuration() to reload configuration when needed
|
||||
# Use load_datasets() to reload configuration when needed
|
||||
|
|
core/streamer.py (140 lines)
@ -5,9 +5,8 @@ import threading
|
|||
import csv
|
||||
import os
|
||||
import sys
|
||||
import shutil
|
||||
from datetime import datetime
|
||||
from typing import Dict, Any, Optional, Set, List
|
||||
from typing import Dict, Any, Optional, Set
|
||||
from pathlib import Path
|
||||
from .plot_manager import PlotManager
|
||||
|
||||
|
@ -34,14 +33,6 @@ class DataStreamer:
|
|||
|
||||
Each dataset thread handles both CSV writing and UDP streaming,
|
||||
but UDP transmission is controlled by independent flag.
|
||||
|
||||
🔍 CSV HEADER VALIDATION
|
||||
========================
|
||||
Automatic header validation ensures CSV file integrity:
|
||||
- When opening existing CSV files, headers are compared with current variable configuration
|
||||
- If headers don't match, the existing file is renamed with format: prefix_to_HH_MM_SS.csv
|
||||
- A new CSV file with correct headers is created using the original filename: prefix_HH.csv
|
||||
- This prevents CSV corruption while preserving historical data
|
||||
"""
|
||||
|
||||
def __init__(self, config_manager, plc_client, event_logger, logger=None):
|
||||
|
@ -158,55 +149,8 @@ class DataStreamer:
|
|||
directory = self.get_csv_directory_path()
|
||||
return os.path.join(directory, filename)
|
||||
|
||||
def read_csv_headers(self, file_path: str) -> List[str]:
|
||||
"""Read the header row from an existing CSV file"""
|
||||
try:
|
||||
with open(file_path, "r", newline="", encoding="utf-8") as file:
|
||||
reader = csv.reader(file)
|
||||
headers = next(reader, [])
|
||||
return headers
|
||||
except (IOError, StopIteration) as e:
|
||||
if self.logger:
|
||||
self.logger.warning(f"Could not read headers from {file_path}: {e}")
|
||||
return []
|
||||
|
||||
def compare_headers(
|
||||
self, existing_headers: List[str], new_headers: List[str]
|
||||
) -> bool:
|
||||
"""Compare two header lists for equality"""
|
||||
return existing_headers == new_headers
|
||||
|
||||
def rename_csv_file_with_timestamp(self, original_path: str, prefix: str) -> str:
|
||||
"""Rename CSV file with 'to' timestamp format"""
|
||||
try:
|
||||
directory = os.path.dirname(original_path)
|
||||
timestamp = datetime.now().strftime("%H_%M_%S")
|
||||
new_filename = f"{prefix}_to_{timestamp}.csv"
|
||||
new_path = os.path.join(directory, new_filename)
|
||||
|
||||
# Ensure the new filename is unique
|
||||
counter = 1
|
||||
while os.path.exists(new_path):
|
||||
new_filename = f"{prefix}_to_{timestamp}_{counter}.csv"
|
||||
new_path = os.path.join(directory, new_filename)
|
||||
counter += 1
|
||||
|
||||
shutil.move(original_path, new_path)
|
||||
|
||||
if self.logger:
|
||||
self.logger.info(
|
||||
f"Renamed CSV file due to header mismatch: {os.path.basename(original_path)} -> {os.path.basename(new_path)}"
|
||||
)
|
||||
|
||||
return new_path
|
||||
|
||||
except Exception as e:
|
||||
if self.logger:
|
||||
self.logger.error(f"Error renaming CSV file {original_path}: {e}")
|
||||
raise
|
||||
|
||||
def setup_dataset_csv_file(self, dataset_id: str):
|
||||
"""Setup CSV file for a specific dataset with header validation"""
|
||||
"""Setup CSV file for a specific dataset"""
|
||||
current_hour = datetime.now().hour
|
||||
|
||||
# If we're using a modification file and the hour hasn't changed, keep using it
|
||||
|
@ -232,72 +176,9 @@ class DataStreamer:
|
|||
self.ensure_csv_directory()
|
||||
csv_path = self.get_dataset_csv_file_path(dataset_id)
|
||||
|
||||
# Get current dataset variables and create expected headers
|
||||
dataset_variables = self.config_manager.get_dataset_variables(dataset_id)
|
||||
expected_headers = ["timestamp"] + list(dataset_variables.keys())
|
||||
|
||||
# Check if file exists and validate headers
|
||||
# Check if file exists to determine if we need headers
|
||||
file_exists = os.path.exists(csv_path)
|
||||
need_to_rename_file = False
|
||||
|
||||
if file_exists and dataset_variables:
|
||||
# Read existing headers from the file
|
||||
existing_headers = self.read_csv_headers(csv_path)
|
||||
|
||||
# Compare headers - if they don't match, we need to rename the old file
|
||||
if existing_headers and not self.compare_headers(
|
||||
existing_headers, expected_headers
|
||||
):
|
||||
need_to_rename_file = True
|
||||
prefix = self.config_manager.datasets[dataset_id]["prefix"]
|
||||
|
||||
if self.logger:
|
||||
self.logger.info(
|
||||
f"Header mismatch detected in CSV file for dataset '{self.config_manager.datasets[dataset_id]['name']}'. "
|
||||
f"Expected: {expected_headers}, Found: {existing_headers}"
|
||||
)
|
||||
|
||||
# Rename the existing file if headers don't match
|
||||
if need_to_rename_file:
|
||||
try:
|
||||
prefix = self.config_manager.datasets[dataset_id]["prefix"]
|
||||
renamed_path = self.rename_csv_file_with_timestamp(csv_path, prefix)
|
||||
|
||||
# Log the rename event
|
||||
self.event_logger.log_event(
|
||||
"info",
|
||||
"csv_file_renamed",
|
||||
f"CSV file renamed due to header mismatch for dataset '{self.config_manager.datasets[dataset_id]['name']}': "
|
||||
f"{os.path.basename(csv_path)} -> {os.path.basename(renamed_path)}",
|
||||
{
|
||||
"dataset_id": dataset_id,
|
||||
"original_file": csv_path,
|
||||
"renamed_file": renamed_path,
|
||||
"expected_headers": expected_headers,
|
||||
"existing_headers": existing_headers,
|
||||
"reason": "header_mismatch",
|
||||
},
|
||||
)
|
||||
|
||||
# File no longer exists after rename, so we'll create a new one
|
||||
file_exists = False
|
||||
|
||||
except Exception as e:
|
||||
if self.logger:
|
||||
self.logger.error(f"Failed to rename CSV file {csv_path}: {e}")
|
||||
# Continue with the existing file despite the header mismatch
|
||||
self.event_logger.log_event(
|
||||
"error",
|
||||
"csv_file_rename_failed",
|
||||
f"Failed to rename CSV file for dataset '{self.config_manager.datasets[dataset_id]['name']}': {str(e)}",
|
||||
{
|
||||
"dataset_id": dataset_id,
|
||||
"file_path": csv_path,
|
||||
"error": str(e),
|
||||
},
|
||||
)
|
||||
|
||||
# Open the file for appending (or creating if it doesn't exist)
|
||||
self.dataset_csv_files[dataset_id] = open(
|
||||
csv_path, "a", newline="", encoding="utf-8"
|
||||
)
|
||||
|
@ -309,19 +190,16 @@ class DataStreamer:
|
|||
# Reset modification file flag when creating regular hourly file
|
||||
self.dataset_using_modification_files[dataset_id] = False
|
||||
|
||||
# Write headers if it's a new file or if the file was renamed
|
||||
if (not file_exists or need_to_rename_file) and dataset_variables:
|
||||
self.dataset_csv_writers[dataset_id].writerow(expected_headers)
|
||||
# Write headers if it's a new file
|
||||
dataset_variables = self.config_manager.get_dataset_variables(dataset_id)
|
||||
if not file_exists and dataset_variables:
|
||||
headers = ["timestamp"] + list(dataset_variables.keys())
|
||||
self.dataset_csv_writers[dataset_id].writerow(headers)
|
||||
self.dataset_csv_files[dataset_id].flush()
|
||||
|
||||
if self.logger:
|
||||
action = (
|
||||
"recreated with correct headers"
|
||||
if need_to_rename_file
|
||||
else "created"
|
||||
)
|
||||
self.logger.info(
|
||||
f"CSV file {action} for dataset '{self.config_manager.datasets[dataset_id]['name']}': {csv_path}"
|
||||
f"CSV file created for dataset '{self.config_manager.datasets[dataset_id]['name']}': {csv_path}"
|
||||
)
|
||||
|
||||
def write_dataset_csv_data(self, dataset_id: str, data: Dict[str, Any]):
|
||||
|
|
|
@ -12,7 +12,12 @@
  <body>
    <div id="root"></div>

    <!-- Chart.js Libraries now loaded via npm modules -->
    <!-- Chart.js Libraries - Load in strict order (compatible versions) -->
    <script src="https://cdn.jsdelivr.net/npm/chart.js@3.9.1"></script>
    <script src="https://cdn.jsdelivr.net/npm/luxon@2"></script>
    <script src="https://cdn.jsdelivr.net/npm/chartjs-adapter-luxon@1.3.1"></script>
    <script src="https://unpkg.com/chartjs-plugin-zoom@1.2.1/dist/chartjs-plugin-zoom.min.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-streaming@2.0.0"></script>

    <script type="module" src="/src/main.jsx"></script>
  </body>
@ -15,12 +15,7 @@
    "@rjsf/chakra-ui": "^5.24.12",
    "@rjsf/core": "^5.24.12",
    "@rjsf/validator-ajv8": "^5.24.12",
    "chart.js": "^3.9.1",
    "chartjs-adapter-luxon": "^1.3.1",
    "chartjs-plugin-streaming": "^2.0.0",
    "chartjs-plugin-zoom": "^1.2.1",
    "framer-motion": "^11.2.12",
    "luxon": "^2.5.2",
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "react-icons": "^5.5.0",
Binary file not shown. (Before: 81 KiB)
@ -9,7 +9,6 @@ function Home() {
    <Container py={4}>
      <Stack spacing={3}>
        <Heading as="h1" size="md" display="flex" alignItems="center" gap={2}>
          <img src="/SIDEL.png" alt="SIDEL" style={{ height: 32 }} />
          <img src={recLogo} alt="REC" style={{ height: 28 }} />
          PLC S7-31x Streamer & Logger (React)
        </Heading>
File diff suppressed because it is too large.
@ -30,16 +30,9 @@ import {
|
|||
Slider,
|
||||
SliderTrack,
|
||||
SliderFilledTrack,
|
||||
SliderThumb,
|
||||
Modal,
|
||||
ModalOverlay,
|
||||
ModalContent,
|
||||
ModalHeader,
|
||||
ModalCloseButton,
|
||||
ModalBody,
|
||||
useDisclosure,
|
||||
SliderThumb
|
||||
} from '@chakra-ui/react'
|
||||
import { SettingsIcon, RepeatIcon, ViewIcon } from '@chakra-ui/icons'
|
||||
import { SettingsIcon, RepeatIcon } from '@chakra-ui/icons'
|
||||
import ChartjsPlot from './ChartjsPlot.jsx'
|
||||
import * as api from '../services/api'
|
||||
|
||||
|
@ -64,7 +57,6 @@ export default function PlotRealtimeSession({
|
|||
|
||||
const [showSettings, setShowSettings] = useState(false)
|
||||
const [isRefreshing, setIsRefreshing] = useState(false)
|
||||
const { isOpen: isFullscreen, onOpen: openFullscreen, onClose: closeFullscreen } = useDisclosure()
|
||||
const [localConfig, setLocalConfig] = useState({
|
||||
time_window: plotDefinition.time_window || 60,
|
||||
y_min: plotDefinition.y_min,
|
||||
|
@ -75,7 +67,6 @@ export default function PlotRealtimeSession({
|
|||
// Visual style properties
|
||||
line_tension: plotDefinition.line_tension !== undefined ? plotDefinition.line_tension : 0.4,
|
||||
stepped: plotDefinition.stepped || false,
|
||||
stacked: plotDefinition.stacked || false,
|
||||
point_radius: plotDefinition.point_radius !== undefined ? plotDefinition.point_radius : 1,
|
||||
point_hover_radius: plotDefinition.point_hover_radius !== undefined ? plotDefinition.point_hover_radius : 4
|
||||
})
|
||||
|
@ -87,27 +78,6 @@ export default function PlotRealtimeSession({
|
|||
// Track if we're in the middle of applying changes to avoid conflicts
|
||||
const applyingChangesRef = useRef(false)
|
||||
|
||||
// Handle fullscreen resize - force chart resize when modal opens/closes
|
||||
useEffect(() => {
|
||||
if (isFullscreen && chartControlsRef.current) {
|
||||
// Delay to ensure modal is fully rendered
|
||||
const timer = setTimeout(() => {
|
||||
if (chartControlsRef.current?.refreshConfiguration) {
|
||||
chartControlsRef.current.refreshConfiguration()
|
||||
}
|
||||
// Also try to trigger a window resize event to force Chart.js to recalculate
|
||||
window.dispatchEvent(new Event('resize'))
|
||||
}, 200)
|
||||
return () => clearTimeout(timer)
|
||||
} else if (!isFullscreen && chartControlsRef.current) {
|
||||
// When exiting fullscreen, also trigger resize
|
||||
const timer = setTimeout(() => {
|
||||
window.dispatchEvent(new Event('resize'))
|
||||
}, 100)
|
||||
return () => clearTimeout(timer)
|
||||
}
|
||||
}, [isFullscreen])
|
||||
|
||||
// Update localConfig when plotDefinition changes (but not during our own updates)
|
||||
useEffect(() => {
|
||||
if (!applyingChangesRef.current) {
|
||||
|
@ -121,7 +91,6 @@ export default function PlotRealtimeSession({
|
|||
// Visual style properties
|
||||
line_tension: plotDefinition.line_tension !== undefined ? plotDefinition.line_tension : 0.4,
|
||||
stepped: plotDefinition.stepped || false,
|
||||
stacked: plotDefinition.stacked || false,
|
||||
point_radius: plotDefinition.point_radius !== undefined ? plotDefinition.point_radius : 1,
|
||||
point_hover_radius: plotDefinition.point_hover_radius !== undefined ? plotDefinition.point_hover_radius : 4
|
||||
})
|
||||
|
@ -131,7 +100,6 @@ export default function PlotRealtimeSession({
|
|||
const cardBg = useColorModeValue('white', 'gray.700')
|
||||
const borderColor = useColorModeValue('gray.200', 'gray.600')
|
||||
const muted = useColorModeValue('gray.600', 'gray.300')
|
||||
const settingsBg = useColorModeValue('gray.50', 'gray.600')
|
||||
|
||||
// Enhanced session object for ChartjsPlot - memoized to prevent recreations
|
||||
const enhancedSession = useMemo(() => ({
|
||||
|
@ -140,7 +108,6 @@ export default function PlotRealtimeSession({
|
|||
is_active: session.is_active,
|
||||
is_paused: session.is_paused,
|
||||
variables_count: plotVariables.length,
|
||||
isFullscreen: isFullscreen,
|
||||
config: {
|
||||
...plotDefinition,
|
||||
...localConfig,
|
||||
|
@ -156,8 +123,7 @@ export default function PlotRealtimeSession({
|
|||
session.is_active,
|
||||
session.is_paused,
|
||||
plotVariables,
|
||||
localConfig,
|
||||
isFullscreen
|
||||
localConfig
|
||||
])
|
||||
|
||||
// Load session status from backend (optional - session may not exist until started)
|
||||
|
@ -353,7 +319,6 @@ export default function PlotRealtimeSession({
|
|||
// Reset visual style properties to defaults
|
||||
line_tension: plotDefinition.line_tension !== undefined ? plotDefinition.line_tension : 0.4,
|
||||
stepped: plotDefinition.stepped || false,
|
||||
stacked: plotDefinition.stacked || false,
|
||||
point_radius: plotDefinition.point_radius !== undefined ? plotDefinition.point_radius : 1,
|
||||
point_hover_radius: plotDefinition.point_hover_radius !== undefined ? plotDefinition.point_hover_radius : 4
|
||||
})
|
||||
|
@ -465,15 +430,6 @@ export default function PlotRealtimeSession({
|
|||
</Box>
|
||||
<Spacer />
|
||||
<HStack>
|
||||
<Button
|
||||
size="sm"
|
||||
variant="outline"
|
||||
onClick={openFullscreen}
|
||||
colorScheme="blue"
|
||||
leftIcon={<ViewIcon />}
|
||||
>
|
||||
Fullscreen
|
||||
</Button>
|
||||
<IconButton
|
||||
icon={<RepeatIcon />}
|
||||
size="sm"
|
||||
|
@ -505,7 +461,7 @@ export default function PlotRealtimeSession({
|
|||
<CardBody pt={0}>
|
||||
{/* Settings Panel */}
|
||||
{showSettings && (
|
||||
<Box mb={4} p={4} bg={settingsBg} borderRadius="md">
|
||||
<Box mb={4} p={4} bg={useColorModeValue('gray.50', 'gray.600')} borderRadius="md">
|
||||
<VStack spacing={4} align="stretch">
|
||||
{/* Basic Plot Settings */}
|
||||
<Box>
|
||||
|
@ -637,25 +593,6 @@ export default function PlotRealtimeSession({
|
|||
</FormControl>
|
||||
</GridItem>
|
||||
|
||||
<GridItem>
|
||||
<FormControl>
|
||||
<FormLabel fontSize="sm">Stacked Y-Axes</FormLabel>
|
||||
<Text fontSize="xs" color="gray.500" mb={2}>
|
||||
Multi-axis visualization with separate Y scales
|
||||
</Text>
|
||||
<Checkbox
|
||||
isChecked={localConfig.stacked}
|
||||
onChange={(e) => setLocalConfig(prev => ({
|
||||
...prev,
|
||||
stacked: e.target.checked
|
||||
}))}
|
||||
size="sm"
|
||||
>
|
||||
Stacked Mode
|
||||
</Checkbox>
|
||||
</FormControl>
|
||||
</GridItem>
|
||||
|
||||
<GridItem>
|
||||
<FormControl>
|
||||
<FormLabel fontSize="sm">Point Size: {localConfig.point_radius}px</FormLabel>
|
||||
|
@ -758,82 +695,8 @@ export default function PlotRealtimeSession({
|
|||
>
|
||||
⏹️ Stop
|
||||
</Button>
|
||||
<Spacer />
|
||||
<Button
|
||||
size="sm"
|
||||
onClick={openFullscreen}
|
||||
colorScheme="blue"
|
||||
variant="solid"
|
||||
leftIcon={<ViewIcon />}
|
||||
>
|
||||
Fullscreen
|
||||
</Button>
|
||||
</HStack>
|
||||
</CardBody>
|
||||
|
||||
{/* Fullscreen Modal */}
|
||||
<Modal isOpen={isFullscreen} onClose={closeFullscreen} size="full">
|
||||
<ModalOverlay bg="blackAlpha.800" />
|
||||
<ModalContent bg={cardBg} m={0} borderRadius={0} h="100vh">
|
||||
<ModalHeader>
|
||||
<HStack>
|
||||
<Text>📈 {plotDefinition.name} - Fullscreen Mode</Text>
|
||||
<Spacer />
|
||||
<Text fontSize="sm" color={muted}>
|
||||
Zoom: Drag to select area | Pan: Shift + Drag | Double-click to reset
|
||||
</Text>
|
||||
</HStack>
|
||||
</ModalHeader>
|
||||
<ModalCloseButton size="lg" />
|
||||
<ModalBody p={4} h="calc(100vh - 80px)" display="flex" flexDirection="column">
|
||||
<Box flex="1" w="100%" minH={0}>
|
||||
<ChartjsPlot session={enhancedSession} height="100%" />
|
||||
</Box>
|
||||
<HStack spacing={2} mt={4} justify="center">
|
||||
<Button
|
||||
size="sm"
|
||||
onClick={() => handleControlClick('start')}
|
||||
colorScheme="green"
|
||||
isDisabled={session.is_active && !session.is_paused}
|
||||
>
|
||||
▶️ Start
|
||||
</Button>
|
||||
<Button
|
||||
size="sm"
|
||||
onClick={() => handleControlClick('pause')}
|
||||
colorScheme="yellow"
|
||||
isDisabled={!session.is_active || session.is_paused}
|
||||
>
|
||||
⏸️ Pause
|
||||
</Button>
|
||||
<Button
|
||||
size="sm"
|
||||
onClick={() => handleControlClick('clear')}
|
||||
variant="outline"
|
||||
>
|
||||
🗑️ Clear
|
||||
</Button>
|
||||
<Button
|
||||
size="sm"
|
||||
onClick={() => handleControlClick('stop')}
|
||||
colorScheme="red"
|
||||
isDisabled={!session.is_active}
|
||||
>
|
||||
⏹️ Stop
|
||||
</Button>
|
||||
{chartControlsRef.current && (
|
||||
<Button
|
||||
size="sm"
|
||||
onClick={() => chartControlsRef.current.resetZoom?.()}
|
||||
variant="outline"
|
||||
>
|
||||
🔄 Reset Zoom
|
||||
</Button>
|
||||
)}
|
||||
</HStack>
|
||||
</ModalBody>
|
||||
</ModalContent>
|
||||
</Modal>
|
||||
</Card>
|
||||
)
|
||||
}
|
||||
|
|
|
@ -14,15 +14,8 @@ import {
|
|||
IconButton,
|
||||
Divider,
|
||||
Spacer,
|
||||
Modal,
|
||||
ModalOverlay,
|
||||
ModalContent,
|
||||
ModalHeader,
|
||||
ModalCloseButton,
|
||||
ModalBody,
|
||||
useDisclosure,
|
||||
} from '@chakra-ui/react'
|
||||
import { EditIcon, SettingsIcon, DeleteIcon, ViewIcon } from '@chakra-ui/icons'
|
||||
import { EditIcon, SettingsIcon, DeleteIcon } from '@chakra-ui/icons'
|
||||
import ChartjsPlot from './ChartjsPlot.jsx'
|
||||
|
||||
export default function PlotRealtimeViewer() {
|
||||
|
@ -152,7 +145,6 @@ function PlotRealtimeCard({ session, onControl, onRefresh }) {
|
|||
const borderColor = useColorModeValue('gray.200', 'gray.600')
|
||||
const muted = useColorModeValue('gray.600', 'gray.300')
|
||||
const chartControlsRef = useRef(null)
|
||||
const { isOpen: isFullscreen, onOpen: openFullscreen, onClose: closeFullscreen } = useDisclosure()
|
||||
|
||||
const handleChartReady = (controls) => {
|
||||
chartControlsRef.current = controls
|
||||
|
@ -161,7 +153,6 @@ function PlotRealtimeCard({ session, onControl, onRefresh }) {
|
|||
const enhancedSession = {
|
||||
...session,
|
||||
onChartReady: handleChartReady,
|
||||
isFullscreen: isFullscreen,
|
||||
}
|
||||
|
||||
const handleControlClick = async (action) => {
|
||||
|
@ -189,12 +180,7 @@ function PlotRealtimeCard({ session, onControl, onRefresh }) {
|
|||
return (
|
||||
<Card bg={cardBg} borderColor={borderColor}>
|
||||
<CardHeader>
|
||||
<FlexHeader
|
||||
session={session}
|
||||
muted={muted}
|
||||
onRefresh={() => onRefresh(session.session_id)}
|
||||
onFullscreen={openFullscreen}
|
||||
/>
|
||||
<FlexHeader session={session} muted={muted} onRefresh={() => onRefresh(session.session_id)} />
|
||||
</CardHeader>
|
||||
<CardBody>
|
||||
<ChartjsPlot session={enhancedSession} height="360px" />
|
||||
|
@ -203,59 +189,13 @@ function PlotRealtimeCard({ session, onControl, onRefresh }) {
|
|||
<Button size="sm" onClick={() => handleControlClick('pause')} colorScheme="yellow">⏸️ Pause</Button>
|
||||
<Button size="sm" onClick={() => handleControlClick('clear')} variant="outline">🗑️ Clear</Button>
|
||||
<Button size="sm" onClick={() => handleControlClick('stop')} colorScheme="red">⏹️ Stop</Button>
|
||||
<Spacer />
|
||||
<Button
|
||||
size="sm"
|
||||
onClick={openFullscreen}
|
||||
colorScheme="blue"
|
||||
variant="solid"
|
||||
>
|
||||
🔍 Fullscreen
|
||||
</Button>
|
||||
</HStack>
|
||||
</CardBody>
|
||||
|
||||
{/* Fullscreen Modal */}
|
||||
<Modal isOpen={isFullscreen} onClose={closeFullscreen} size="full">
|
||||
<ModalOverlay bg="blackAlpha.800" />
|
||||
<ModalContent bg={cardBg} m={0} borderRadius={0}>
|
||||
<ModalHeader>
|
||||
<HStack>
|
||||
<Text>📈 {session.name || session.session_id} - Fullscreen Mode</Text>
|
||||
<Spacer />
|
||||
<Text fontSize="sm" color={muted}>
|
||||
Zoom: Drag to select area | Pan: Shift + Drag | Double-click to reset
|
||||
</Text>
|
||||
</HStack>
|
||||
</ModalHeader>
|
||||
<ModalCloseButton size="lg" />
|
||||
<ModalBody p={4}>
|
||||
<VStack spacing={4} h="full">
|
||||
<ChartjsPlot session={enhancedSession} height="calc(100vh - 120px)" />
|
||||
<HStack spacing={2}>
|
||||
<Button size="sm" onClick={() => handleControlClick('start')} colorScheme="green">▶️ Start</Button>
|
||||
<Button size="sm" onClick={() => handleControlClick('pause')} colorScheme="yellow">⏸️ Pause</Button>
|
||||
<Button size="sm" onClick={() => handleControlClick('clear')} variant="outline">🗑️ Clear</Button>
|
||||
<Button size="sm" onClick={() => handleControlClick('stop')} colorScheme="red">⏹️ Stop</Button>
|
||||
{chartControlsRef.current && (
|
||||
<Button
|
||||
size="sm"
|
||||
onClick={() => chartControlsRef.current.resetZoom?.()}
|
||||
variant="outline"
|
||||
>
|
||||
🔄 Reset Zoom
|
||||
</Button>
|
||||
)}
|
||||
</HStack>
|
||||
</VStack>
|
||||
</ModalBody>
|
||||
</ModalContent>
|
||||
</Modal>
|
||||
</Card>
|
||||
)
|
||||
}
|
||||
|
||||
function FlexHeader({ session, muted, onRefresh, onFullscreen }) {
|
||||
function FlexHeader({ session, muted, onRefresh }) {
|
||||
return (
|
||||
<HStack align="center">
|
||||
<Box>
|
||||
|
@ -266,21 +206,7 @@ function FlexHeader({ session, muted, onRefresh, onFullscreen }) {
|
|||
</Box>
|
||||
<Spacer />
|
||||
<HStack>
|
||||
<Button
|
||||
size="sm"
|
||||
variant="outline"
|
||||
onClick={onFullscreen}
|
||||
colorScheme="blue"
|
||||
>
|
||||
🔍 Fullscreen
|
||||
</Button>
|
||||
<IconButton
|
||||
icon={<SettingsIcon />}
|
||||
size="sm"
|
||||
variant="outline"
|
||||
aria-label="Refresh status"
|
||||
onClick={onRefresh}
|
||||
/>
|
||||
<IconButton icon={<SettingsIcon />} size="sm" variant="outline" aria-label="Refresh status" onClick={onRefresh} />
|
||||
</HStack>
|
||||
</HStack>
|
||||
)
|
||||
|
|
|
@ -5,9 +5,6 @@ import { BrowserRouter } from 'react-router-dom'
import { ChakraProvider, ColorModeScript } from '@chakra-ui/react'
import theme from './theme.js'

// Initialize Chart.js for offline usage
import './utils/chartSetup.js'

createRoot(document.getElementById('root')).render(
  <React.StrictMode>
    <ColorModeScript initialColorMode={theme.config.initialColorMode} />
@ -1133,10 +1133,7 @@ function DashboardContent() {
    <Container maxW="container.xl" py={6}>
      <VStack spacing={6} align="stretch">
        <Flex align="center" mb={4}>
          <Heading size="xl" display="flex" alignItems="center" gap={3}>
            <img src="/SIDEL.png" alt="SIDEL" style={{ height: 40 }} />
            🏭 PLC S7-31x Streamer & Logger
          </Heading>
          <Heading size="xl">🏭 PLC S7-31x Streamer & Logger</Heading>
          <Spacer />
          <Button size="sm" variant="outline" onClick={loadStatus}>
            🔄 Refresh Status
@ -200,22 +200,6 @@ export async function getPlotVariables() {
  return toJsonOrThrow(res)
}

// Historical data loading
export async function getHistoricalData(variables, timeWindowSeconds) {
  const res = await fetch(`${BASE_URL}/api/plots/historical`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Accept': 'application/json'
    },
    body: JSON.stringify({
      variables: variables,
      time_window: timeWindowSeconds
    })
  })
  return toJsonOrThrow(res)
}

// Plot session status and control (aliases for existing functions)
export async function getPlotSession(sessionId) {
  // Use existing getPlotConfig to get session info
@ -1,58 +0,0 @@
/**
 * Chart.js Setup for Offline Usage
 * Replaces CDN dependencies with npm modules
 */

// Import Chart.js and all required plugins
import {
  Chart as ChartJS,
  CategoryScale,
  LinearScale,
  PointElement,
  LineElement,
  LineController,
  BarController,
  BarElement,
  Title,
  Tooltip,
  Legend,
  TimeScale,
  TimeSeriesScale,
  Filler
} from 'chart.js';

// Import time adapter
import 'chartjs-adapter-luxon';

// Import plugins
import zoomPlugin from 'chartjs-plugin-zoom';
import streamingPlugin from 'chartjs-plugin-streaming';

// Register all Chart.js components
ChartJS.register(
  CategoryScale,
  LinearScale,
  PointElement,
  LineElement,
  LineController,
  BarController,
  BarElement,
  Title,
  Tooltip,
  Legend,
  TimeScale,
  TimeSeriesScale,
  Filler,
  zoomPlugin,
  streamingPlugin
);

// Make Chart.js available globally for existing components
window.Chart = ChartJS;

// Export for ES module usage
export default ChartJS;
export { ChartJS };

// Initialize Chart.js setup
console.log('📊 Chart.js setup complete - all plugins registered');
main.py (362 lines)
@ -208,32 +208,13 @@ def write_config(config_id):
|
|||
# Write the data
|
||||
json_manager.write_json(config_id, payload)
|
||||
|
||||
# Automatically reload backend configuration for specific config types
|
||||
if streamer:
|
||||
# Notify backend to reload if it's PLC config
|
||||
if config_id == "plc" and streamer:
|
||||
try:
|
||||
if config_id == "plc":
|
||||
streamer.config_manager.load_configuration()
|
||||
elif config_id in ["dataset-definitions", "dataset-variables"]:
|
||||
# Reload dataset configuration to pick up changes
|
||||
streamer.reload_dataset_configuration()
|
||||
|
||||
if streamer.logger:
|
||||
streamer.logger.info(
|
||||
f"Auto-reloaded backend configuration after updating {config_id}"
|
||||
)
|
||||
elif config_id in ["plot-definitions", "plot-variables"]:
|
||||
# Plot configurations don't need backend reload currently
|
||||
pass
|
||||
streamer.config_manager.load_configuration()
|
||||
except Exception as e:
|
||||
# Log the error but don't fail the save operation
|
||||
if streamer and streamer.logger:
|
||||
streamer.logger.warning(
|
||||
f"Could not auto-reload {config_id} config in backend: {e}"
|
||||
)
|
||||
else:
|
||||
print(
|
||||
f"Warning: Could not auto-reload {config_id} config in backend: {e}"
|
||||
)
|
||||
print(f"Warning: Could not reload config in backend: {e}")
|
||||
|
||||
return jsonify(
|
||||
{
|
||||
|
@ -279,8 +260,8 @@ def reload_config(config_id):
|
|||
if config_id == "plc":
|
||||
streamer.config_manager.load_configuration()
|
||||
elif config_id in ["dataset-definitions", "dataset-variables"]:
|
||||
# Reload dataset configuration using the new method
|
||||
streamer.reload_dataset_configuration()
|
||||
# Reload dataset configuration
|
||||
streamer.load_datasets()
|
||||
elif config_id in ["plot-definitions", "plot-variables"]:
|
||||
# Reload plot configuration if needed
|
||||
pass
|
||||
|
@ -1821,337 +1802,6 @@ def get_plot_variables():
|
|||
return jsonify({"error": str(e)}), 500
|
||||
|
||||
|
||||
@app.route("/api/plots/historical", methods=["POST"])
|
||||
def get_historical_data():
|
||||
"""Get historical data from CSV files for plot initialization"""
|
||||
print("🔍 DEBUG: Historical endpoint called")
|
||||
try:
|
||||
data = request.get_json()
|
||||
print(f"🔍 DEBUG: Request data: {data}")
|
||||
|
||||
if not data:
|
||||
print("❌ DEBUG: No data provided")
|
||||
return jsonify({"error": "No data provided"}), 400
|
||||
|
||||
variables = data.get("variables", [])
|
||||
time_window_seconds = data.get("time_window", 60)
|
||||
|
||||
print(f"🔍 DEBUG: Variables: {variables}")
|
||||
print(f"🔍 DEBUG: Time window: {time_window_seconds}")
|
||||
|
||||
if not variables:
|
||||
print("❌ DEBUG: No variables specified")
|
||||
return jsonify({"error": "No variables specified"}), 400
|
||||
|
||||
# Import pandas and glob (datetime already imported globally)
|
||||
try:
|
||||
print("🔍 DEBUG: Importing pandas...")
|
||||
import pandas as pd
|
||||
|
||||
print("🔍 DEBUG: Importing glob...")
|
||||
import glob
|
||||
|
||||
print("🔍 DEBUG: Importing timedelta...")
|
||||
from datetime import timedelta
|
||||
|
||||
print("🔍 DEBUG: All imports successful")
|
||||
except ImportError as e:
|
||||
print(f"❌ DEBUG: Import failed: {e}")
|
||||
return jsonify({"error": f"pandas import failed: {str(e)}"}), 500
|
||||
except Exception as e:
|
||||
print(f"❌ DEBUG: Unexpected import error: {e}")
|
||||
import traceback
|
||||
|
||||
traceback.print_exc()
|
||||
return jsonify({"error": f"Import error: {str(e)}"}), 500
|
||||
|
||||
# Calculate time range
|
||||
try:
|
||||
print("🔍 DEBUG: Calculating time range...")
|
||||
end_time = datetime.now()
|
||||
start_time = end_time - timedelta(seconds=time_window_seconds)
|
||||
print(f"🔍 DEBUG: Time range calculated: {start_time} to {end_time}")
|
||||
except Exception as e:
|
||||
print(f"❌ DEBUG: Time calculation error: {e}")
|
||||
import traceback
|
||||
|
||||
traceback.print_exc()
|
||||
return jsonify({"error": f"Time calculation failed: {str(e)}"}), 500
|
||||
|
||||
# Get records directory
|
||||
try:
|
||||
print("🔍 DEBUG: Getting records directory...")
|
||||
records_dir = os.path.join(os.path.dirname(__file__), "records")
|
||||
print(f"🔍 DEBUG: Records directory: {records_dir}")
|
||||
print(f"🔍 DEBUG: Records dir exists: {os.path.exists(records_dir)}")
|
||||
|
||||
if not os.path.exists(records_dir):
|
||||
print("🔍 DEBUG: Records directory not found, returning empty data")
|
||||
return jsonify({"data": []})
|
||||
|
||||
historical_data = []
|
||||
|
||||
# Get date folders to search (today and yesterday in case time window spans days)
|
||||
print("🔍 DEBUG: Calculating date folders...")
|
||||
today = end_time.strftime("%d-%m-%Y")
|
||||
yesterday = (end_time - timedelta(days=1)).strftime("%d-%m-%Y")
|
||||
print(f"🔍 DEBUG: Searching dates: {yesterday}, {today}")
|
||||
except Exception as e:
|
||||
print(f"❌ DEBUG: Records directory error: {e}")
|
||||
import traceback
|
||||
|
||||
traceback.print_exc()
|
||||
return jsonify({"error": f"Records directory error: {str(e)}"}), 500
|
||||
|
||||
date_folders = []
|
||||
for date_folder in [yesterday, today]:
|
||||
folder_path = os.path.join(records_dir, date_folder)
|
||||
print(
|
||||
f"🔍 DEBUG: Checking folder: {folder_path}, exists: {os.path.exists(folder_path)}"
|
||||
)
|
||||
if os.path.exists(folder_path):
|
||||
date_folders.append(folder_path)
|
||||
|
||||
print(f"🔍 DEBUG: Found date folders: {date_folders}")
|
||||
|
||||
# Search for CSV files with any of the required variables
|
||||
for folder_path in date_folders:
|
||||
csv_files = glob.glob(os.path.join(folder_path, "*.csv"))
|
||||
print(f"🔍 DEBUG: CSV files in {folder_path}: {csv_files}")
|
||||
|
||||
for csv_file in csv_files:
|
||||
try:
|
||||
print(f"🔍 DEBUG: Processing CSV file: {csv_file}")
|
||||
|
||||
# Read first line to check if any required variables are present with proper encoding handling
|
||||
header_line = None
|
||||
for encoding in ["utf-8", "utf-8-sig", "utf-16", "latin-1"]:
|
||||
try:
|
||||
with open(csv_file, "r", encoding=encoding) as f:
|
||||
header_line = f.readline().strip()
|
||||
if header_line:
|
||||
print(
|
||||
f"🔍 DEBUG: Successfully read header with {encoding} encoding"
|
||||
)
|
||||
break
|
||||
except (UnicodeDecodeError, UnicodeError):
|
||||
continue
|
||||
except Exception:
|
||||
continue
|
||||
|
||||
if not header_line:
|
||||
print(
|
||||
f"🔍 DEBUG: Could not read header from {csv_file}, skipping"
|
||||
)
|
||||
continue
|
||||
|
||||
# Clean header line from BOM and normalize
|
||||
header_line = header_line.replace("\ufeff", "").replace("\x00", "")
|
||||
headers = [
|
||||
h.strip().replace("\x00", "") for h in header_line.split(",")
|
||||
]
|
||||
# Clean any remaining unicode artifacts
|
||||
headers = [
|
||||
h
|
||||
for h in headers
|
||||
if h and len(h.replace("\x00", "").strip()) > 0
|
||||
]
|
||||
print(f"🔍 DEBUG: Headers in {csv_file}: {headers}")
|
||||
|
||||
# Check if any of our variables are in this file
|
||||
matching_vars = [var for var in variables if var in headers]
|
||||
print(f"🔍 DEBUG: Matching variables: {matching_vars}")
|
||||
|
||||
if not matching_vars:
|
||||
print(
|
||||
f"🔍 DEBUG: No matching variables in {csv_file}, skipping"
|
||||
)
|
||||
continue
|
||||
|
||||
# Read the CSV file with proper encoding and error handling
|
||||
print(f"🔍 DEBUG: Reading CSV file with pandas...")
|
||||
df = None
|
||||
for encoding in ["utf-8", "utf-8-sig", "utf-16", "latin-1"]:
|
||||
try:
|
||||
df = pd.read_csv(
|
||||
csv_file, encoding=encoding, on_bad_lines="skip"
|
||||
)
|
||||
print(
|
||||
f"🔍 DEBUG: CSV loaded with {encoding}, shape: {df.shape}"
|
||||
)
|
||||
break
|
||||
except (
|
||||
UnicodeDecodeError,
|
||||
UnicodeError,
|
||||
pd.errors.ParserError,
|
||||
):
|
||||
continue
|
||||
except Exception as e:
|
||||
print(f"🔍 DEBUG: Error reading with {encoding}: {e}")
|
||||
continue
|
||||
|
||||
if df is None:
|
||||
print(
|
||||
f"Warning: Could not read CSV file {csv_file} with any encoding"
|
||||
)
|
||||
continue
|
||||
|
||||
# Clean column names from BOM and unicode artifacts
|
||||
df.columns = [
|
||||
col.replace("\ufeff", "").replace("\x00", "").strip()
|
||||
for col in df.columns
|
||||
]
|
||||
|
||||
# Convert timestamp to datetime with flexible parsing
|
||||
print(f"🔍 DEBUG: Converting timestamps...")
|
||||
timestamp_col = None
|
||||
for col in df.columns:
|
||||
if "timestamp" in col.lower():
|
||||
timestamp_col = col
|
||||
break
|
||||
|
||||
if timestamp_col is None:
|
||||
print(
|
||||
f"🔍 DEBUG: No timestamp column found in {csv_file}, skipping"
|
||||
)
|
||||
continue
|
||||
|
||||
try:
|
||||
# Try multiple timestamp formats
|
||||
df[timestamp_col] = pd.to_datetime(
|
||||
df[timestamp_col], errors="coerce"
|
||||
)
|
||||
# Remove rows with invalid timestamps
|
||||
df = df.dropna(subset=[timestamp_col])
|
||||
|
||||
if df.empty:
|
||||
print(
|
||||
f"🔍 DEBUG: No valid timestamps in {csv_file}, skipping"
|
||||
)
|
||||
continue
|
||||
|
||||
# Normalize column name to 'timestamp'
|
||||
if timestamp_col != "timestamp":
|
||||
df = df.rename(columns={timestamp_col: "timestamp"})
|
||||
|
||||
print(
|
||||
f"🔍 DEBUG: Timestamp range: {df['timestamp'].min()} to {df['timestamp'].max()}"
|
||||
)
|
||||
print(f"🔍 DEBUG: Filter range: {start_time} to {end_time}")
|
||||
except Exception as e:
|
||||
print(
|
||||
f"🔍 DEBUG: Timestamp conversion failed for {csv_file}: {e}"
|
||||
)
|
||||
continue
|
||||
|
||||
# Recalculate matching variables after column cleaning
|
||||
clean_headers = list(df.columns)
|
||||
matching_vars = [var for var in variables if var in clean_headers]
|
||||
print(
|
||||
f"🔍 DEBUG: Matching variables after cleaning: {matching_vars}"
|
||||
)
|
||||
|
||||
if not matching_vars:
|
||||
print(
|
||||
f"🔍 DEBUG: No matching variables after cleaning in {csv_file}, skipping"
|
||||
)
|
||||
continue
|
||||
|
||||
# Filter by time range
|
||||
mask = (df["timestamp"] >= start_time) & (
|
||||
df["timestamp"] <= end_time
|
||||
)
|
||||
filtered_df = df[mask]
|
||||
print(f"🔍 DEBUG: Filtered dataframe shape: {filtered_df.shape}")
|
||||
|
||||
if filtered_df.empty:
|
||||
print(f"🔍 DEBUG: No data in time range for {csv_file}")
|
||||
continue
|
||||
|
||||
# Extract data for matching variables only
|
||||
print(f"🔍 DEBUG: Extracting data for variables: {matching_vars}")
|
||||
for _, row in filtered_df.iterrows():
|
||||
timestamp = row["timestamp"]
|
||||
for var in matching_vars:
|
||||
if var in row and pd.notna(row[var]):
|
||||
try:
|
||||
# Convert value to appropriate type
|
||||
value = row[var]
|
||||
|
||||
# Handle boolean values
|
||||
if isinstance(value, str):
|
||||
value_lower = value.lower().strip()
|
||||
if value_lower == "true":
|
||||
value = True
|
||||
elif value_lower == "false":
|
||||
value = False
|
||||
else:
|
||||
try:
|
||||
value = float(value)
|
||||
except ValueError:
|
||||
continue
|
||||
elif isinstance(value, (int, float)):
|
||||
value = float(value)
|
||||
else:
|
||||
continue
|
||||
|
||||
historical_data.append(
|
||||
{
|
||||
"timestamp": timestamp.isoformat(),
|
||||
"variable": var,
|
||||
"value": value,
|
||||
}
|
||||
)
|
||||
except Exception as e:
|
||||
# Skip invalid values
|
||||
print(
|
||||
f"🔍 DEBUG: Skipping invalid value for {var}: {e}"
|
||||
)
|
||||
continue
|
||||
|
||||
except Exception as e:
|
||||
# Skip files that can't be read
|
||||
print(f"Warning: Could not read CSV file {csv_file}: {e}")
|
||||
continue
|
||||
|
||||
# Sort by timestamp
|
||||
historical_data.sort(key=lambda x: x["timestamp"])
|
||||
|
||||
print(f"🔍 DEBUG: Total historical data points found: {len(historical_data)}")
|
||||
print(
|
||||
f"🔍 DEBUG: Variables found: {list(set([item['variable'] for item in historical_data]))}"
|
||||
)
|
||||
|
||||
return jsonify(
|
||||
{
|
||||
"data": historical_data,
|
||||
"time_range": {
|
||||
"start": start_time.isoformat(),
|
||||
"end": end_time.isoformat(),
|
||||
},
|
||||
"variables_found": list(
|
||||
set([item["variable"] for item in historical_data])
|
||||
),
|
||||
"total_points": len(historical_data),
|
||||
}
|
||||
)
|
||||
|
||||
except ImportError as e:
|
||||
return (
|
||||
jsonify(
|
||||
{
|
||||
"error": f"pandas is required for historical data processing: {str(e)}"
|
||||
}
|
||||
),
|
||||
500,
|
||||
)
|
||||
except Exception as e:
|
||||
import traceback
|
||||
|
||||
traceback.print_exc()
|
||||
return jsonify({"error": f"Internal server error: {str(e)}"}), 500
|
||||
|
||||
|
||||
@app.route("/api/status")
|
||||
def get_status():
|
||||
"""Get current status"""
|
||||
|
|
|
@ -3,5 +3,4 @@ python-snap7==1.3
psutil==5.9.5
flask-socketio==5.3.6
jsonschema==4.22.0
Flask-Cors==4.0.0
pandas
Flask-Cors==4.0.0
@ -1,13 +1,13 @@
{
  "last_state": {
    "should_connect": true,
    "should_stream": false,
    "should_stream": true,
    "active_datasets": [
      "Test",
      "Fast",
      "DAR"
      "DAR",
      "Fast"
    ]
  },
  "auto_recovery_enabled": true,
  "last_update": "2025-08-15T13:14:31.493157"
  "last_update": "2025-08-14T18:32:34.865706"
}
@ -1,129 +0,0 @@
"""
Test script to validate automatic configuration reloading
"""

import json
import requests
import time

# Configuration
BASE_URL = "http://localhost:5000"
TEST_DATASET_ID = "TestReload"


def test_config_reload():
    """Test that backend automatically reloads configuration when datasets are updated"""

    print("🧪 Testing automatic configuration reload...")

    try:
        # Step 1: Get current dataset definitions
        print("📖 Reading current dataset definitions...")
        response = requests.get(f"{BASE_URL}/api/config/dataset-definitions")
        if not response.ok:
            print(f"❌ Failed to read dataset definitions: {response.status_code}")
            return False

        current_config = response.json()
        datasets = current_config.get("data", {}).get("datasets", [])
        print(f"Current datasets: {[d.get('id') for d in datasets]}")

        # Step 2: Add a test dataset
        print(f"➕ Adding test dataset: {TEST_DATASET_ID}")
        test_dataset = {
            "id": TEST_DATASET_ID,
            "name": "Test Reload Dataset",
            "prefix": "test_reload",
            "sampling_interval": 1.0,
            "enabled": False,
        }

        # Add to datasets list
        new_datasets = [
            d for d in datasets if d.get("id") != TEST_DATASET_ID
        ]  # Remove if exists
        new_datasets.append(test_dataset)

        new_config = {
            "datasets": new_datasets,
            "version": "1.0",
            "last_update": f"{time.time()}",
        }

        # Save configuration
        response = requests.put(
            f"{BASE_URL}/api/config/dataset-definitions",
            headers={"Content-Type": "application/json"},
            json=new_config,
        )

        if not response.ok:
            print(f"❌ Failed to save dataset definitions: {response.status_code}")
            return False

        print("✅ Dataset definitions saved")

        # Step 3: Check if backend has reloaded the configuration
        print("🔍 Checking if backend reloaded configuration...")
        time.sleep(1)  # Give backend a moment to reload

        # Get status from backend
        response = requests.get(f"{BASE_URL}/api/status")
        if not response.ok:
            print(f"❌ Failed to get status: {response.status_code}")
            return False

        status = response.json()
        backend_datasets = status.get("datasets", {})

        if TEST_DATASET_ID in backend_datasets:
            print(f"✅ Backend successfully loaded new dataset: {TEST_DATASET_ID}")
            print(f"Dataset details: {backend_datasets[TEST_DATASET_ID]}")

            # Step 4: Clean up - remove test dataset
            print("🧹 Cleaning up test dataset...")
            cleanup_datasets = [
                d for d in new_datasets if d.get("id") != TEST_DATASET_ID
            ]
            cleanup_config = {
                "datasets": cleanup_datasets,
                "version": "1.0",
                "last_update": f"{time.time()}",
            }

            response = requests.put(
                f"{BASE_URL}/api/config/dataset-definitions",
                headers={"Content-Type": "application/json"},
                json=cleanup_config,
            )

            if response.ok:
                print("✅ Test dataset cleaned up")
            else:
                print(
                    f"⚠️ Warning: Failed to clean up test dataset: {response.status_code}"
                )

            return True
        else:
            print(
                f"❌ Backend did not reload configuration. Available datasets: {list(backend_datasets.keys())}"
            )
            return False

    except requests.exceptions.ConnectionError:
        print(
            "❌ Could not connect to backend. Make sure the Flask server is running on http://localhost:5000"
        )
        return False
    except Exception as e:
        print(f"❌ Test failed with error: {e}")
        return False


if __name__ == "__main__":
    success = test_config_reload()
    if success:
        print("\n🎉 Configuration reload test PASSED!")
    else:
        print("\n💥 Configuration reload test FAILED!")

@ -1,105 +0,0 @@
"""
Test script for CSV header validation functionality
"""

import csv
import os
import tempfile
import shutil
from datetime import datetime


def test_header_validation():
    """Test the header validation logic without full system dependencies"""

    # Create a temporary test directory
    test_dir = tempfile.mkdtemp()

    try:
        # Test 1: Create a CSV file with old headers
        old_csv_path = os.path.join(test_dir, "test_data_14.csv")
        old_headers = ["timestamp", "var1", "var2"]

        with open(old_csv_path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(old_headers)
            writer.writerow(["2025-08-14 12:00:00", "100", "200"])
            writer.writerow(["2025-08-14 12:00:01", "101", "201"])

        print(f"✅ Created test CSV file: {old_csv_path}")

        # Test 2: Read headers function
        def read_csv_headers(file_path):
            try:
                with open(file_path, "r", newline="", encoding="utf-8") as file:
                    reader = csv.reader(file)
                    headers = next(reader, [])
                    return headers
            except (IOError, StopIteration) as e:
                print(f"Could not read headers from {file_path}: {e}")
                return []

        # Test 3: Compare headers function
        def compare_headers(existing_headers, new_headers):
            return existing_headers == new_headers

        # Test 4: Rename file function
        def rename_csv_file_with_timestamp(original_path, prefix):
            directory = os.path.dirname(original_path)
            timestamp = datetime.now().strftime("%H_%M_%S")
            new_filename = f"{prefix}_to_{timestamp}.csv"
            new_path = os.path.join(directory, new_filename)

            # Ensure the new filename is unique
            counter = 1
            while os.path.exists(new_path):
                new_filename = f"{prefix}_to_{timestamp}_{counter}.csv"
                new_path = os.path.join(directory, new_filename)
                counter += 1

            shutil.move(original_path, new_path)
            return new_path

        # Test the functions
        existing_headers = read_csv_headers(old_csv_path)
        new_headers = ["timestamp", "var1", "var2", "var3"]  # Different headers

        print(f"Existing headers: {existing_headers}")
        print(f"New headers: {new_headers}")
        print(f"Headers match: {compare_headers(existing_headers, new_headers)}")

        # Test header mismatch scenario
        if not compare_headers(existing_headers, new_headers):
            print("❌ Header mismatch detected! Renaming file...")
            renamed_path = rename_csv_file_with_timestamp(old_csv_path, "test_data")
            print(f"✅ File renamed to: {os.path.basename(renamed_path)}")

            # Create new file with correct headers
            new_csv_path = old_csv_path  # Same original path
            with open(new_csv_path, "w", newline="", encoding="utf-8") as f:
                writer = csv.writer(f)
                writer.writerow(new_headers)
                writer.writerow(["2025-08-14 12:00:02", "102", "202", "302"])

            print(
                f"✅ Created new CSV file with correct headers: {os.path.basename(new_csv_path)}"
            )

        # Verify the files
        print(f"\nFiles in test directory:")
        for file in os.listdir(test_dir):
            if file.endswith(".csv"):
                file_path = os.path.join(test_dir, file)
                headers = read_csv_headers(file_path)
                print(f" {file}: {headers}")

        print("\n✅ All tests passed!")

    finally:
        # Clean up
        shutil.rmtree(test_dir)
        print(f"🧹 Cleaned up test directory: {test_dir}")


if __name__ == "__main__":
    test_header_validation()

Binary file not shown.

@ -1,336 +0,0 @@
#!/usr/bin/env python3
"""
CSV Validator and Cleaner Utility

This utility helps diagnose and fix common CSV issues:
1. Encoding problems (BOM, UTF-16, etc.)
2. Inconsistent number of fields
3. Malformed timestamps
4. Invalid data types

Usage:
    python utils/csv_validator.py --scan records/
    python utils/csv_validator.py --fix records/15-08-2025/problematic_file.csv
    python utils/csv_validator.py --validate records/15-08-2025/
"""

import os
import sys
import argparse
import pandas as pd
import glob
from pathlib import Path
from datetime import datetime


class CSVValidator:
    """Validates and cleans CSV files for the PLC streaming system"""

    def __init__(self):
        self.encodings_to_try = ["utf-8", "utf-8-sig", "utf-16", "latin-1", "cp1252"]
        self.issues_found = []
        self.fixed_files = []

    def detect_encoding(self, file_path):
        """Detect the encoding of a CSV file"""
        for encoding in self.encodings_to_try:
            try:
                with open(file_path, "r", encoding=encoding) as f:
                    f.read(1024)  # Read first 1KB
                return encoding
            except (UnicodeDecodeError, UnicodeError):
                continue
        return None

    def read_csv_headers(self, file_path):
        """Read CSV headers with encoding detection"""
        encoding = self.detect_encoding(file_path)
        if not encoding:
            return None, None, "Could not detect encoding"

        try:
            with open(file_path, "r", encoding=encoding) as f:
                header_line = f.readline().strip()
                if not header_line:
                    return None, encoding, "Empty header line"

                # Clean header from BOM and unicode artifacts
                header_line = header_line.replace("\ufeff", "").replace("\x00", "")
                headers = [
                    h.strip().replace("\x00", "") for h in header_line.split(",")
                ]
                headers = [h for h in headers if h and len(h.strip()) > 0]

                return headers, encoding, None
        except Exception as e:
            return None, encoding, str(e)

    def validate_csv_structure(self, file_path):
        """Validate CSV structure and detect issues"""
        issues = []

        # Check headers
        headers, encoding, header_error = self.read_csv_headers(file_path)
        if header_error:
            issues.append(
                {"type": "header_error", "message": header_error, "file": file_path}
            )
            return issues

        if not headers:
            issues.append(
                {
                    "type": "no_headers",
                    "message": "No valid headers found",
                    "file": file_path,
                }
            )
            return issues

        # Check for encoding issues in headers
        if any("\x00" in h or "ÿþ" in h or "" in h for h in headers):
            issues.append(
                {
                    "type": "encoding_artifacts",
                    "message": f"Headers contain encoding artifacts: {headers}",
                    "file": file_path,
                    "encoding": encoding,
                }
            )

        # Try to read full CSV with pandas
        try:
            df = None
            for enc in self.encodings_to_try:
                try:
                    df = pd.read_csv(file_path, encoding=enc, on_bad_lines="skip")
                    break
                except (UnicodeDecodeError, UnicodeError, pd.errors.ParserError):
                    continue

            if df is None:
                issues.append(
                    {
                        "type": "read_error",
                        "message": "Could not read CSV with any encoding",
                        "file": file_path,
                    }
                )
                return issues

            # Check for inconsistent columns
            expected_cols = len(headers)
            if len(df.columns) != expected_cols:
                issues.append(
                    {
                        "type": "column_mismatch",
                        "message": f"Expected {expected_cols} columns, found {len(df.columns)}",
                        "file": file_path,
                        "expected_headers": headers,
                        "actual_headers": list(df.columns),
                    }
                )

            # Check for timestamp column
            timestamp_cols = [col for col in df.columns if "timestamp" in col.lower()]
            if not timestamp_cols:
                issues.append(
                    {
                        "type": "no_timestamp",
                        "message": "No timestamp column found",
                        "file": file_path,
                        "columns": list(df.columns),
                    }
                )
            else:
                # Validate timestamp format
                timestamp_col = timestamp_cols[0]
                try:
                    df[timestamp_col] = pd.to_datetime(
                        df[timestamp_col], errors="coerce"
                    )
                    invalid_timestamps = df[timestamp_col].isna().sum()
                    if invalid_timestamps > 0:
                        issues.append(
                            {
                                "type": "invalid_timestamps",
                                "message": f"{invalid_timestamps} invalid timestamps found",
                                "file": file_path,
                                "timestamp_column": timestamp_col,
                            }
                        )
                except Exception as e:
                    issues.append(
                        {
                            "type": "timestamp_parse_error",
                            "message": f"Cannot parse timestamps: {str(e)}",
                            "file": file_path,
                            "timestamp_column": timestamp_col,
                        }
                    )

        except Exception as e:
            issues.append(
                {
                    "type": "general_error",
                    "message": f"Error reading CSV: {str(e)}",
                    "file": file_path,
                }
            )

        return issues

    def scan_directory(self, directory_path):
        """Scan directory for CSV issues"""
        print(f"🔍 Scanning directory: {directory_path}")

        csv_files = glob.glob(os.path.join(directory_path, "**/*.csv"), recursive=True)
        total_files = len(csv_files)

        print(f"📁 Found {total_files} CSV files")

        all_issues = []
        for i, csv_file in enumerate(csv_files, 1):
            print(f"📄 Checking {i}/{total_files}: {os.path.basename(csv_file)}")

            issues = self.validate_csv_structure(csv_file)
            if issues:
                all_issues.extend(issues)
                print(f" ⚠️ {len(issues)} issues found")
            else:
                print(f" ✅ OK")

        return all_issues

    def fix_csv_file(self, file_path, backup=True):
        """Fix a problematic CSV file"""
        print(f"🔧 Fixing CSV file: {file_path}")

        if backup:
            backup_path = (
                f"{file_path}.backup.{datetime.now().strftime('%Y%m%d_%H%M%S')}"
            )
            import shutil

            shutil.copy2(file_path, backup_path)
            print(f"📋 Backup created: {backup_path}")

        # Detect encoding and read file
        encoding = self.detect_encoding(file_path)
        if not encoding:
            print("❌ Could not detect encoding")
            return False

        try:
            # Read with detected encoding
            df = pd.read_csv(file_path, encoding=encoding, on_bad_lines="skip")

            # Clean column names
            df.columns = [
                col.replace("\ufeff", "").replace("\x00", "").strip()
                for col in df.columns
            ]

            # Find timestamp column
            timestamp_cols = [col for col in df.columns if "timestamp" in col.lower()]
            if timestamp_cols:
                timestamp_col = timestamp_cols[0]
                # Normalize to 'timestamp'
                if timestamp_col != "timestamp":
                    df = df.rename(columns={timestamp_col: "timestamp"})

            # Fix timestamps
            df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
            # Remove rows with invalid timestamps
            df = df.dropna(subset=["timestamp"])

            # Write fixed file with UTF-8 encoding
            df.to_csv(file_path, index=False, encoding="utf-8")
            print(f"✅ Fixed and saved with UTF-8 encoding")

            return True

        except Exception as e:
            print(f"❌ Error fixing file: {e}")
            return False

    def print_issue_summary(self, issues):
        """Print a summary of issues found"""
        if not issues:
            print("\n🎉 No issues found!")
            return

        print(f"\n📊 Summary: {len(issues)} issues found")

        issue_types = {}
        for issue in issues:
            issue_type = issue["type"]
            if issue_type not in issue_types:
                issue_types[issue_type] = []
            issue_types[issue_type].append(issue)

        for issue_type, type_issues in issue_types.items():
            print(
                f"\n🔸 {issue_type.replace('_', ' ').title()}: {len(type_issues)} files"
            )
            for issue in type_issues[:5]:  # Show first 5
                print(f" - {os.path.basename(issue['file'])}: {issue['message']}")
            if len(type_issues) > 5:
                print(f" ... and {len(type_issues) - 5} more")


def main():
    parser = argparse.ArgumentParser(
        description="CSV Validator and Cleaner for PLC Streaming System"
    )
    parser.add_argument("path", help="Path to CSV file or directory to process")
    parser.add_argument(
        "--scan", action="store_true", help="Scan directory for issues (default)"
    )
    parser.add_argument("--fix", action="store_true", help="Fix individual CSV file")
    parser.add_argument(
        "--fix-all", action="store_true", help="Fix all problematic files in directory"
    )
    parser.add_argument(
        "--no-backup", action="store_true", help="Do not create backup when fixing"
    )

    args = parser.parse_args()

    validator = CSVValidator()

    if args.fix and os.path.isfile(args.path):
        # Fix single file
        success = validator.fix_csv_file(args.path, backup=not args.no_backup)
        sys.exit(0 if success else 1)

    elif os.path.isdir(args.path):
        # Scan directory
        issues = validator.scan_directory(args.path)
        validator.print_issue_summary(issues)

        if args.fix_all and issues:
            print(
                f"\n🔧 Fixing {len(set(issue['file'] for issue in issues))} problematic files..."
            )

            problematic_files = set(
                issue["file"] for issue in issues if issue["type"] != "no_timestamp"
            )
            fixed_count = 0

            for file_path in problematic_files:
                if validator.fix_csv_file(file_path, backup=not args.no_backup):
                    fixed_count += 1

            print(f"✅ Fixed {fixed_count}/{len(problematic_files)} files")

        sys.exit(1 if issues else 0)

    else:
        print(f"❌ Path not found: {args.path}")
        sys.exit(1)


if __name__ == "__main__":
    main()
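
Beyond the CLI usage documented in the docstring, the same class can be driven programmatically. A minimal sketch, assuming the module is importable as `utils.csv_validator`; the records folder below is a hypothetical example path.

# Drive the validator from Python instead of the command line.
from utils.csv_validator import CSVValidator

validator = CSVValidator()
issues = validator.scan_directory("records/15-08-2025")  # hypothetical path
validator.print_issue_summary(issues)

# Fix only the files that reported problems, keeping timestamped backups.
for path in {issue["file"] for issue in issues}:
    validator.fix_csv_file(path, backup=True)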