S7_DB_Utils funcionando v1

commit 0f162377cd (parent e85c0c169d)
@@ -1,15 +1,15 @@
 --- Log de Ejecución: x3.py ---
 Grupo: S7_DB_Utils
 Directorio de Trabajo: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
-Inicio: 2025-05-17 21:31:24
-Fin: 2025-05-17 21:31:25
-Duración: 0:00:00.136451
+Inicio: 2025-05-18 02:09:01
+Fin: 2025-05-18 02:09:01
+Duración: 0:00:00.154928
 Estado: SUCCESS (Código de Salida: 0)
 
 --- SALIDA ESTÁNDAR (STDOUT) ---
 Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
 Los archivos JSON de salida se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json
-Archivos encontrados para procesar: 3
+Archivos encontrados para procesar: 2
 
 --- Procesando archivo: db1001_data.db ---
 Parseo completo. Intentando serializar a JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_data.json
@@ -19,10 +19,6 @@ Resultado guardado en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giov
 Parseo completo. Intentando serializar a JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format.json
 Resultado guardado en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format.json
 
---- Procesando archivo: db1001_format_updated.db ---
-Parseo completo. Intentando serializar a JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format_updated.json
-Resultado guardado en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format_updated.json
-
 --- Proceso completado ---
 
 --- ERRORES (STDERR) ---

@@ -1,34 +1,26 @@
 --- Log de Ejecución: x4.py ---
 Grupo: S7_DB_Utils
 Directorio de Trabajo: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
-Inicio: 2025-05-18 00:51:35
-Fin: 2025-05-18 00:51:35
-Duración: 0:00:00.110751
+Inicio: 2025-05-18 02:13:16
+Fin: 2025-05-18 02:13:16
+Duración: 0:00:00.162328
 Estado: SUCCESS (Código de Salida: 0)
 
 --- SALIDA ESTÁNDAR (STDOUT) ---
 Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
 Los archivos de documentación generados se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
-Archivos JSON encontrados para procesar: 3
+Archivos JSON encontrados para procesar: 2
 
 --- Procesando archivo JSON: db1001_data.json ---
 Archivo JSON 'db1001_data.json' cargado correctamente.
-INFO: Usando '_begin_block_assignments_ordered' para generar bloque BEGIN de DB 'HMI_Blender_Parameters'.
 Archivo S7 reconstruido generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.txt
 Archivo Markdown de documentación generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.md
 
 --- Procesando archivo JSON: db1001_format.json ---
 Archivo JSON 'db1001_format.json' cargado correctamente.
-INFO: Usando '_begin_block_assignments_ordered' para generar bloque BEGIN de DB 'HMI_Blender_Parameters'.
 Archivo S7 reconstruido generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.txt
 Archivo Markdown de documentación generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.md
 
---- Procesando archivo JSON: db1001_updated.json ---
-Archivo JSON 'db1001_updated.json' cargado correctamente.
-INFO: Usando '_begin_block_assignments_ordered' para generar bloque BEGIN de DB 'HMI_Blender_Parameters'.
-Archivo S7 reconstruido generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.txt
-Archivo Markdown de documentación generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.md
-
 --- Proceso de generación de documentación completado ---
 
 --- ERRORES (STDERR) ---

@@ -1,15 +1,15 @@
 --- Log de Ejecución: x5.py ---
 Grupo: S7_DB_Utils
 Directorio de Trabajo: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
-Inicio: 2025-05-17 21:51:25
-Fin: 2025-05-17 21:51:25
-Duración: 0:00:00.099104
+Inicio: 2025-05-18 02:19:47
+Fin: 2025-05-18 02:19:47
+Duración: 0:00:00.125156
 Estado: SUCCESS (Código de Salida: 0)
 
 --- SALIDA ESTÁNDAR (STDOUT) ---
 Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
 Los archivos Markdown de descripción se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
-Archivos JSON encontrados para procesar: 3
+Archivos JSON encontrados para procesar: 2
 
 --- Procesando archivo JSON para descripción: db1001_data.json ---
 Archivo JSON 'db1001_data.json' cargado correctamente.
@@ -19,10 +19,6 @@ Documentación Markdown completa generada: C:\Trabajo\SIDEL\09 - SAE452 - Diet a
 Archivo JSON 'db1001_format.json' cargado correctamente.
 Documentación Markdown completa generada: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format_description.md
 
---- Procesando archivo JSON para descripción: db1001_format_updated.json ---
-Archivo JSON 'db1001_format_updated.json' cargado correctamente.
-Documentación Markdown completa generada: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format_updated_description.md
-
 --- Proceso de generación de descripciones Markdown completado ---
 
 --- ERRORES (STDERR) ---

@@ -1,30 +1,25 @@
 --- Log de Ejecución: x6.py ---
 Grupo: S7_DB_Utils
 Directorio de Trabajo: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
-Inicio: 2025-05-17 22:05:32
-Fin: 2025-05-17 22:05:33
-Duración: 0:00:00.614471
+Inicio: 2025-05-18 02:20:21
+Fin: 2025-05-18 02:20:22
+Duración: 0:00:01.130771
 Estado: SUCCESS (Código de Salida: 0)
 
 --- SALIDA ESTÁNDAR (STDOUT) ---
 Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
 Los archivos Excel de documentación se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
-Archivos JSON encontrados para procesar: 3
+Archivos JSON encontrados para procesar: 2
 
 --- Procesando archivo JSON para Excel: db1001_data.json ---
 Archivo JSON 'db1001_data.json' cargado correctamente.
-Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_data.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.json.xlsx
-Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.json.xlsx
+Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_data.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.json_HMI_Blender_Parameters.xlsx
+Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.json_HMI_Blender_Parameters.xlsx
 
 --- Procesando archivo JSON para Excel: db1001_format.json ---
 Archivo JSON 'db1001_format.json' cargado correctamente.
-Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_format.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.json.xlsx
-Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.json.xlsx
+Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_format.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.json_HMI_Blender_Parameters.xlsx
+Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.json_HMI_Blender_Parameters.xlsx
 
---- Procesando archivo JSON para Excel: db1001_format_updated.json ---
-Archivo JSON 'db1001_format_updated.json' cargado correctamente.
-Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_format_updated.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format_updated.json.xlsx
-Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format_updated.json.xlsx
-
 --- Proceso de generación de documentación Excel completado ---
 
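Note: the x6 diff above renames the Excel outputs from `<json>.xlsx` to `<json>_<DB name>.xlsx`, which keeps workbooks from colliding when one JSON file contains several DBs. A minimal sketch of that naming rule as inferred from the log; the actual x6.py code is not part of this commit, and `excel_output_path` is a hypothetical helper name:

```python
# Hedged sketch of the new Excel naming visible in the x6 log:
# <json_filename>_<db_name>.xlsx instead of <json_filename>.xlsx.
import os

def excel_output_path(doc_dir: str, json_filename: str, db_name: str) -> str:
    # Append the DB name so each DB in a JSON file gets its own workbook.
    return os.path.join(doc_dir, f"{json_filename}_{db_name}.xlsx")

print(excel_output_path(r"C:\docs", "db1001_data.json", "HMI_Blender_Parameters"))
# C:\docs\db1001_data.json_HMI_Blender_Parameters.xlsx
```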
@@ -1,14 +1,15 @@
 --- Log de Ejecución: x7_value_updater.py ---
 Grupo: S7_DB_Utils
 Directorio de Trabajo: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
-Inicio: 2025-05-18 00:51:17
-Fin: 2025-05-18 00:51:18
-Duración: 0:00:00.223903
+Inicio: 2025-05-18 02:56:24
+Fin: 2025-05-18 02:56:25
+Duración: 0:00:00.761362
 Estado: SUCCESS (Código de Salida: 0)
 
 --- SALIDA ESTÁNDAR (STDOUT) ---
 Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
 Los archivos JSON se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json
+Los archivos de documentación se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
 Se encontraron 1 pares de archivos para procesar.
 
 --- Procesando par de archivos ---
@@ -17,291 +18,14 @@ Format file: db1001_format.db
 Parseando archivo data: db1001_data.db
 Parseando archivo format: db1001_format.db
 Archivos JSON generados: db1001_data.json y db1001_format.json
-Aplanando variables por offset...
-Comparando estructuras: 251 variables en _data, 251 variables en _format
+Comparando estructuras para DB 'HMI_Blender_Parameters': 284 variables en _data, 284 variables en _format
 
 Los archivos son compatibles. Creando el archivo _updated...
-Mapeando STAT0.STAT1.STAT2 -> Processor_Options.Blender_OPT._ModelNum (offset 0.0)
-Mapeando STAT0.STAT1.STAT3 -> Processor_Options.Blender_OPT._CO2_Offset (offset 2.0)
-Mapeando STAT0.STAT1.STAT4 -> Processor_Options.Blender_OPT._MaxSyrDeltaBrix (offset 6.0)
-Mapeando STAT0.STAT1.STAT5 -> Processor_Options.Blender_OPT._BrixMeter (offset 10.0)
-Mapeando STAT0.STAT1.STAT6 -> Processor_Options.Blender_OPT.Spare101 (offset 10.1)
-Mapeando STAT0.STAT1.STAT7 -> Processor_Options.Blender_OPT._TrackH2OEnable (offset 10.2)
-Mapeando STAT0.STAT1.STAT8 -> Processor_Options.Blender_OPT._PAmPDSType (offset 10.3)
-Mapeando STAT0.STAT1.STAT9 -> Processor_Options.Blender_OPT._HistoricalTrends (offset 10.4)
-Mapeando STAT0.STAT1.STAT10 -> Processor_Options.Blender_OPT._PowerMeter (offset 10.5)
-Mapeando STAT0.STAT1.STAT11 -> Processor_Options.Blender_OPT._Report (offset 10.6)
-Mapeando STAT0.STAT1.STAT12 -> Processor_Options.Blender_OPT._Balaiage (offset 10.7)
-Mapeando STAT0.STAT1.STAT13 -> Processor_Options.Blender_OPT._Valves_FullFeedback (offset 11.0)
-Mapeando STAT0.STAT1.STAT14 -> Processor_Options.Blender_OPT._Valves_SingleFeedback (offset 11.1)
-Mapeando STAT0.STAT1.STAT15 -> Processor_Options.Blender_OPT._PumpsSafetySwitches (offset 11.2)
-Mapeando STAT0.STAT1.STAT16 -> Processor_Options.Blender_OPT._SurgeProtectionAct (offset 11.3)
-Mapeando STAT0.STAT1.STAT17 -> Processor_Options.Blender_OPT._DBC_Type (offset 11.4)
-Mapeando STAT0.STAT1.STAT18 -> Processor_Options.Blender_OPT._CO2InletMeter (offset 11.5)
-Mapeando STAT0.STAT1.STAT19 -> Processor_Options.Blender_OPT._ProductO2Meter (offset 11.6)
-Mapeando STAT0.STAT1.STAT20 -> Processor_Options.Blender_OPT._CopressedAirInletMeter (offset 11.7)
-Mapeando STAT0.STAT1.STAT21 -> Processor_Options.Blender_OPT._MeterType (offset 12.0)
-Mapeando STAT0.STAT1.STAT22 -> Processor_Options.Blender_OPT._MeterReceiveOnly (offset 14.0)
-Mapeando STAT0.STAT1.STAT23 -> Processor_Options.Blender_OPT._SyrBrixMeter (offset 14.1)
-Mapeando STAT0.STAT1.STAT24 -> Processor_Options.Blender_OPT._Flooding_Start_Up (offset 14.2)
-Mapeando STAT0.STAT1.STAT25 -> Processor_Options.Blender_OPT._FastChangeOverEnabled (offset 14.3)
-Mapeando STAT0.STAT1.STAT26 -> Processor_Options.Blender_OPT._WaterInletMeter (offset 14.4)
-Mapeando STAT0.STAT1.STAT27 -> Processor_Options.Blender_OPT._BlendFillSystem (offset 14.5)
-Mapeando STAT0.STAT1.STAT28 -> Processor_Options.Blender_OPT._TrackFillerSpeed (offset 14.6)
-Mapeando STAT0.STAT1.STAT29 -> Processor_Options.Blender_OPT._SignalExchange (offset 16.0)
-Mapeando STAT0.STAT1.STAT30 -> Processor_Options.Blender_OPT._CoolerPresent (offset 18.0)
-Mapeando STAT0.STAT1.STAT31 -> Processor_Options.Blender_OPT._CoolerControl (offset 20.0)
-Mapeando STAT0.STAT1.STAT32 -> Processor_Options.Blender_OPT._CoolerType (offset 22.0)
-Mapeando STAT0.STAT1.STAT33 -> Processor_Options.Blender_OPT._LocalCIP (offset 24.0)
-Mapeando STAT0.STAT1.STAT34 -> Processor_Options.Blender_OPT._ICS_CustomerHotWater (offset 24.1)
-Mapeando STAT0.STAT1.STAT35 -> Processor_Options.Blender_OPT._ICS_CustomerChemRecov (offset 24.2)
-Mapeando STAT0.STAT1.STAT36 -> Processor_Options.Blender_OPT._CIPSignalExchange (offset 24.3)
-Mapeando STAT0.STAT1.STAT37 -> Processor_Options.Blender_OPT._ICS_CustomerChemicals (offset 24.4)
-Mapeando STAT0.STAT1.STAT38 -> Processor_Options.Blender_OPT._CarboPresent (offset 24.5)
-Mapeando STAT0.STAT1.STAT39 -> Processor_Options.Blender_OPT._InverterSyrupPumpPPP302 (offset 24.6)
-Mapeando STAT0.STAT1.STAT40 -> Processor_Options.Blender_OPT._InverterWaterPumpPPN301 (offset 24.7)
-Mapeando STAT0.STAT1.STAT41 -> Processor_Options.Blender_OPT._DoubleDeair (offset 25.0)
-Mapeando STAT0.STAT1.STAT42 -> Processor_Options.Blender_OPT._DeairPreMixed (offset 25.1)
-Mapeando STAT0.STAT1.STAT43 -> Processor_Options.Blender_OPT._Deaireation (offset 25.2)
-Mapeando STAT0.STAT1.STAT44 -> Processor_Options.Blender_OPT._StillWaterByPass (offset 25.3)
-Mapeando STAT0.STAT1.STAT45 -> Processor_Options.Blender_OPT._ManifoldSetting (offset 25.4)
-Mapeando STAT0.STAT1.STAT46 -> Processor_Options.Blender_OPT._InverterProdPumpPPM303 (offset 25.5)
-Mapeando STAT0.STAT1.STAT47 -> Processor_Options.Blender_OPT._SidelCip (offset 25.6)
-Mapeando STAT0.STAT1.STAT48 -> Processor_Options.Blender_OPT._EthernetCom_CpuPN_CP (offset 25.7)
-Mapeando STAT0.STAT1.STAT49 -> Processor_Options.Blender_OPT._2ndOutlet (offset 26.0)
-Mapeando STAT0.STAT1.STAT50 -> Processor_Options.Blender_OPT._Promass (offset 28.0)
-Mapeando STAT0.STAT1.STAT51 -> Processor_Options.Blender_OPT._WaterPromass (offset 30.0)
-Mapeando STAT0.STAT1.STAT52 -> Processor_Options.Blender_OPT._ProductConductimeter (offset 30.1)
-Mapeando STAT0.STAT1.STAT53 -> Processor_Options.Blender_OPT._ICS_CustomerH2ORecov (offset 30.2)
-Mapeando STAT0.STAT1.STAT54 -> Processor_Options.Blender_OPT.Spare303 (offset 30.3)
-Mapeando STAT0.STAT1.STAT55 -> Processor_Options.Blender_OPT._CO2_GAS2_Injection (offset 30.4)
-Mapeando STAT0.STAT1.STAT56 -> Processor_Options.Blender_OPT._InverterVacuuPumpPPN304 (offset 30.5)
-Mapeando STAT0.STAT1.STAT57 -> Processor_Options.Blender_OPT._InverterBoostPumpPPM307 (offset 30.6)
-Mapeando STAT0.STAT1.STAT58 -> Processor_Options.Blender_OPT._RunOut_Water (offset 30.7)
-Mapeando STAT0.STAT1.STAT59 -> Processor_Options.Blender_OPT._FlowMeterType (offset 31.0)
-Mapeando STAT0.STAT1.STAT60 -> Processor_Options.Blender_OPT._SidelFiller (offset 31.1)
-Mapeando STAT0.STAT1.STAT61 -> Processor_Options.Blender_OPT._Simulation (offset 31.2)
-Mapeando STAT0.STAT1.STAT62 -> Processor_Options.Blender_OPT._ProductCoolingCTRL (offset 31.3)
-Mapeando STAT0.STAT1.STAT63 -> Processor_Options.Blender_OPT._ChillerCTRL (offset 31.4)
-Mapeando STAT0.STAT1.STAT64 -> Processor_Options.Blender_OPT._CO2_SterileFilter (offset 31.5)
-Mapeando STAT0.STAT1.STAT65 -> Processor_Options.Blender_OPT._InverterRecirPumpPPM306 (offset 31.6)
-Mapeando STAT0.STAT1.STAT66 -> Processor_Options.Blender_OPT._ProdPressReleaseRVM304 (offset 31.7)
-Mapeando STAT0.STAT1.STAT67 -> Processor_Options.Blender_OPT._VacuumPump (offset 32.0)
-Mapeando STAT0.STAT1.STAT68 -> Processor_Options.Blender_OPT._GAS2InjectionType (offset 34.0)
-Mapeando STAT0.STAT1.STAT69 -> Processor_Options.Blender_OPT._InjectionPress_Ctrl (offset 36.0)
-Mapeando STAT0.STAT1.STAT70 -> Processor_Options.Blender_OPT._ProdPressureType (offset 38.0)
-Mapeando STAT0.STAT1.STAT71 -> Processor_Options.Blender_OPT._CIPHeatType (offset 40.0)
-Mapeando STAT0.STAT1.STAT72 -> Processor_Options.Blender_OPT._EHS_NrRes (offset 42.0)
-Mapeando STAT73[1] -> Spare1 (offset 44.0)
-Mapeando STAT73[2] -> Spare1 (offset 44.0)
-Mapeando STAT73[3] -> Spare1 (offset 44.0)
-Mapeando STAT73[4] -> Spare1 (offset 44.0)
-Mapeando STAT73[5] -> Spare1 (offset 44.0)
-Mapeando STAT73[6] -> Spare1 (offset 44.0)
-Mapeando STAT73[7] -> Spare1 (offset 44.0)
-Mapeando STAT73[8] -> Spare1 (offset 44.0)
-Mapeando STAT73[9] -> Spare1 (offset 44.0)
-Mapeando STAT74 -> _RVM301_DeadBand (offset 62.0)
-Mapeando STAT75 -> _RVM301_Kp (offset 66.0)
-Mapeando STAT76.STAT77 -> Actual_Recipe_Parameters._Name (offset 70.0)
-Mapeando STAT76.STAT78 -> Actual_Recipe_Parameters._EnProdTemp (offset 104.0)
-Mapeando STAT76.STAT79 -> Actual_Recipe_Parameters._SyrFlushing (offset 104.1)
-Mapeando STAT76.STAT80 -> Actual_Recipe_Parameters._GAS2_Injection (offset 104.2)
-Mapeando STAT76.STAT81 -> Actual_Recipe_Parameters._Eq_Pression_Selected (offset 104.3)
-Mapeando STAT76.STAT82 -> Actual_Recipe_Parameters._DeoxStripEn (offset 104.4)
-Mapeando STAT76.STAT83 -> Actual_Recipe_Parameters._DeoxVacuumEn (offset 104.5)
-Mapeando STAT76.STAT84 -> Actual_Recipe_Parameters._DeoxPreMixed (offset 104.6)
-Mapeando STAT76.STAT85 -> Actual_Recipe_Parameters._EnBlowOffProdPipeCO2Fil (offset 104.7)
-Mapeando STAT76.STAT86 -> Actual_Recipe_Parameters._WaterSelection (offset 105.0)
-Mapeando STAT76.STAT87 -> Actual_Recipe_Parameters._FillerNextRecipeNum (offset 106.0)
-Mapeando STAT76.STAT88 -> Actual_Recipe_Parameters._BottleShape (offset 107.0)
-Mapeando STAT76.STAT89 -> Actual_Recipe_Parameters._Type (offset 108.0)
-Mapeando STAT76.STAT90 -> Actual_Recipe_Parameters._ProdMeterRecipeNum (offset 110.0)
-Mapeando STAT76.STAT91 -> Actual_Recipe_Parameters._SyrupBrix (offset 112.0)
-Mapeando STAT76.STAT92 -> Actual_Recipe_Parameters._SyrupDensity (offset 116.0)
-Mapeando STAT76.STAT93 -> Actual_Recipe_Parameters._SyrupFactor (offset 120.0)
-Mapeando STAT76.STAT94 -> Actual_Recipe_Parameters._ProductBrix (offset 124.0)
-Mapeando STAT76.STAT95 -> Actual_Recipe_Parameters._ProductionRate (offset 128.0)
-Mapeando STAT76.STAT96 -> Actual_Recipe_Parameters._Ratio (offset 132.0)
-Mapeando STAT76.STAT97 -> Actual_Recipe_Parameters._ProdBrixOffset (offset 136.0)
-Mapeando STAT76.STAT98 -> Actual_Recipe_Parameters._CO2Vols (offset 140.0)
-Mapeando STAT76.STAT99 -> Actual_Recipe_Parameters._CO2Fact (offset 144.0)
-Mapeando STAT76.STAT100 -> Actual_Recipe_Parameters._ProdTankPress (offset 148.0)
-Mapeando STAT76.STAT101 -> Actual_Recipe_Parameters._SP_ProdTemp (offset 152.0)
-Mapeando STAT76.STAT102 -> Actual_Recipe_Parameters._PrdTankMinLevel (offset 156.0)
-Mapeando STAT76.STAT103 -> Actual_Recipe_Parameters._WaterValveSave (offset 160.0)
-Mapeando STAT76.STAT104 -> Actual_Recipe_Parameters._SyrupValveSave (offset 164.0)
-Mapeando STAT76.STAT105 -> Actual_Recipe_Parameters._CarboCO2ValveSave (offset 168.0)
-Mapeando STAT76.STAT106 -> Actual_Recipe_Parameters._ProdMeterHighBrix (offset 172.0)
-Mapeando STAT76.STAT107 -> Actual_Recipe_Parameters._ProdMeterLowBrix (offset 176.0)
-Mapeando STAT76.STAT108 -> Actual_Recipe_Parameters._ProdMeterHighCO2 (offset 180.0)
-Mapeando STAT76.STAT109 -> Actual_Recipe_Parameters._ProdMeterLowCO2 (offset 184.0)
-Mapeando STAT76.STAT110 -> Actual_Recipe_Parameters._ProdMeter_ZeroCO2 (offset 188.0)
-Mapeando STAT76.STAT111 -> Actual_Recipe_Parameters._ProdMeter_ZeroBrix (offset 192.0)
-Mapeando STAT76.STAT112 -> Actual_Recipe_Parameters._ProdHighCond (offset 196.0)
-Mapeando STAT76.STAT113 -> Actual_Recipe_Parameters._ProdLowCond (offset 200.0)
-Mapeando STAT76.STAT114 -> Actual_Recipe_Parameters._BottleSize (offset 204.0)
-Mapeando STAT76.STAT115 -> Actual_Recipe_Parameters._FillingValveHead_SP (offset 208.0)
-Mapeando STAT76.STAT116 -> Actual_Recipe_Parameters._SyrMeter_ZeroBrix (offset 212.0)
-Mapeando STAT76.STAT117 -> Actual_Recipe_Parameters._FirstProdExtraCO2Fact (offset 216.0)
-Mapeando STAT76.STAT118 -> Actual_Recipe_Parameters._Gas2Vols (offset 220.0)
-Mapeando STAT76.STAT119 -> Actual_Recipe_Parameters._Gas2Fact (offset 224.0)
-Mapeando STAT76.STAT120 -> Actual_Recipe_Parameters._SyrupPumpPressure (offset 228.0)
-Mapeando STAT76.STAT121 -> Actual_Recipe_Parameters._WaterPumpPressure (offset 232.0)
-Mapeando STAT76.STAT122 -> Actual_Recipe_Parameters._CO2_Air_N2_PressSelect (offset 236.0)
-Mapeando STAT76.STAT123 -> Actual_Recipe_Parameters._KFactRVM304BlowOff (offset 238.0)
-Mapeando STAT76.STAT124 -> Actual_Recipe_Parameters._ProdRecircPumpFreq (offset 242.0)
-Mapeando STAT76.STAT125 -> Actual_Recipe_Parameters._ProdBoosterPumpPress (offset 246.0)
-Mapeando STAT76.STAT126 -> Actual_Recipe_Parameters._ProdSendPumpFreq (offset 250.0)
-Mapeando STAT127[1] -> Spare2 (offset 254.0)
-Mapeando STAT127[2] -> Spare2 (offset 254.0)
-Mapeando STAT127[3] -> Spare2 (offset 254.0)
-Mapeando STAT127[4] -> Spare2 (offset 254.0)
-Mapeando STAT127[5] -> Spare2 (offset 254.0)
-Mapeando STAT128 -> Next_Recipe_Name (offset 264.0)
-Mapeando STAT129 -> Next_Recipe_Number (offset 298.0)
-Mapeando STAT130[1] -> Spare3 (offset 300.0)
-Mapeando STAT130[2] -> Spare3 (offset 300.0)
-Mapeando STAT130[3] -> Spare3 (offset 300.0)
-Mapeando STAT130[4] -> Spare3 (offset 300.0)
-Mapeando STAT130[5] -> Spare3 (offset 300.0)
-Mapeando STAT130[6] -> Spare3 (offset 300.0)
-Mapeando STAT130[7] -> Spare3 (offset 300.0)
-Mapeando STAT130[8] -> Spare3 (offset 300.0)
-Mapeando STAT130[9] -> Spare3 (offset 300.0)
-Mapeando STAT130[10] -> Spare3 (offset 300.0)
-Mapeando STAT130[11] -> Spare3 (offset 300.0)
-Mapeando STAT130[12] -> Spare3 (offset 300.0)
-Mapeando STAT130[13] -> Spare3 (offset 300.0)
-Mapeando STAT130[14] -> Spare3 (offset 300.0)
-Mapeando STAT130[15] -> Spare3 (offset 300.0)
-Mapeando STAT130[16] -> Spare3 (offset 300.0)
-Mapeando STAT130[17] -> Spare3 (offset 300.0)
-Mapeando STAT130[18] -> Spare3 (offset 300.0)
-Mapeando STAT131.STAT132 -> ProcessSetup.Spare000 (offset 336.0)
-Mapeando STAT131.STAT133 -> ProcessSetup.Spare040 (offset 340.0)
-Mapeando STAT131.STAT134 -> ProcessSetup._KWaterLoss (offset 344.0)
-Mapeando STAT131.STAT135 -> ProcessSetup._KSyrupLoss (offset 348.0)
-Mapeando STAT131.STAT136 -> ProcessSetup._KProdLoss (offset 352.0)
-Mapeando STAT131.STAT137 -> ProcessSetup._KPPM303 (offset 356.0)
-Mapeando STAT131.STAT138 -> ProcessSetup._BaialageRVM301OVMin (offset 360.0)
-Mapeando STAT131.STAT139 -> ProcessSetup._SyrupLinePressure (offset 364.0)
-Mapeando STAT131.STAT140 -> ProcessSetup._CIPRMM301OV (offset 368.0)
-Mapeando STAT131.STAT141 -> ProcessSetup._CIPRMP302OV (offset 372.0)
-Mapeando STAT131.STAT142 -> ProcessSetup._CIPTM301MinLevel (offset 376.0)
-Mapeando STAT131.STAT143 -> ProcessSetup._CIPTM301MaxLevel (offset 380.0)
-Mapeando STAT131.STAT144 -> ProcessSetup._CIPPPM303Freq (offset 384.0)
-Mapeando STAT131.STAT145 -> ProcessSetup._CIPTP301MinLevel (offset 388.0)
-Mapeando STAT131.STAT146 -> ProcessSetup._CIPTP301MaxLevel (offset 392.0)
-Mapeando STAT131.STAT147 -> ProcessSetup._RinseRMM301OV (offset 396.0)
-Mapeando STAT131.STAT148 -> ProcessSetup._RinseRMP302OV (offset 400.0)
-Mapeando STAT131.STAT149 -> ProcessSetup._RinseTM301Press (offset 404.0)
-Mapeando STAT131.STAT150 -> ProcessSetup._RinsePPM303Freq (offset 408.0)
-Mapeando STAT131.STAT151 -> ProcessSetup._DrainTM301Press (offset 412.0)
-Mapeando STAT131.STAT152 -> ProcessSetup._KRecBlendError (offset 416.0)
-Mapeando STAT131.STAT153 -> ProcessSetup._KRecCarboCO2Error (offset 420.0)
-Mapeando STAT131.STAT154 -> ProcessSetup._MaxBlendError (offset 424.0)
-Mapeando STAT131.STAT155 -> ProcessSetup._MaxCarboCO2Error (offset 428.0)
-Mapeando STAT131.STAT156 -> ProcessSetup._StartUpBrixExtraWater (offset 432.0)
-Mapeando STAT131.STAT157 -> ProcessSetup._StartUpCO2ExtraWater (offset 436.0)
-Mapeando STAT131.STAT158 -> ProcessSetup._StartUpPPM303Freq (offset 440.0)
-Mapeando STAT131.STAT159 -> ProcessSetup._SyrupRoomTank (offset 444.0)
-Mapeando STAT131.STAT160 -> ProcessSetup._SyrupRunOutLiters (offset 446.0)
-Mapeando STAT131.STAT161 -> ProcessSetup._InjCO2Press_Offset (offset 450.0)
-Mapeando STAT131.STAT162 -> ProcessSetup._InjCO2Press_MinFlow (offset 454.0)
-Mapeando STAT131.STAT163 -> ProcessSetup._InjCO2Press_MaxFlow (offset 458.0)
-Mapeando STAT131.STAT164 -> ProcessSetup._CarboCO2Pressure (offset 462.0)
-Mapeando STAT131.STAT165 -> ProcessSetup._N2MinPressure (offset 466.0)
-Mapeando STAT131.STAT166 -> ProcessSetup._DiffSensor_Height (offset 470.0)
-Mapeando STAT131.STAT167 -> ProcessSetup._DiffSensor_DeltaHeight (offset 474.0)
-Mapeando STAT131.STAT168 -> ProcessSetup._DiffSensor_Offset (offset 478.0)
-Mapeando STAT131.STAT169 -> ProcessSetup._FillingValveHeight (offset 482.0)
-Mapeando STAT131.STAT170 -> ProcessSetup._FillerDiameter (offset 486.0)
-Mapeando STAT131.STAT171 -> ProcessSetup._FillingValveNum (offset 490.0)
-Mapeando STAT131.STAT172 -> ProcessSetup._FillerProdPipeDN (offset 492.0)
-Mapeando STAT131.STAT173 -> ProcessSetup._FillerProdPipeMass (offset 496.0)
-Mapeando STAT131.STAT174 -> ProcessSetup._FillingTime (offset 500.0)
-Mapeando STAT131.STAT175 -> ProcessSetup._TM301Height_0 (offset 504.0)
-Mapeando STAT131.STAT176 -> ProcessSetup._TM301LevelPerc_2 (offset 508.0)
-Mapeando STAT131.STAT177 -> ProcessSetup._TM301Height_2 (offset 512.0)
-Mapeando STAT131.STAT178 -> ProcessSetup._RVN304Factor (offset 516.0)
-Mapeando STAT131.STAT179 -> ProcessSetup._DrainTM301Flushing (offset 520.0)
-Mapeando STAT131.STAT180 -> ProcessSetup._FirstProdExtraBrix (offset 524.0)
-Mapeando STAT131.STAT181 -> ProcessSetup._FirstProdDietExtraSyr (offset 528.0)
-Mapeando STAT131.STAT182 -> ProcessSetup._EndProdLastSyrlt (offset 532.0)
-Mapeando STAT131.STAT183 -> ProcessSetup._TM301DrainSt0Time (offset 536.0)
-Mapeando STAT131.STAT184 -> ProcessSetup._TM301DrainSt1Time (offset 538.0)
-Mapeando STAT131.STAT185 -> ProcessSetup._ProdPipeRunOutSt0Time (offset 540.0)
-Mapeando STAT131.STAT186 -> ProcessSetup._RMM301ProdPipeRunOu (offset 542.0)
-Mapeando STAT131.STAT187 -> ProcessSetup._RMP302ProdPipeRunOu (offset 546.0)
-Mapeando STAT131.STAT188 -> ProcessSetup._ProdPipeRunOutAmount (offset 550.0)
-Mapeando STAT131.STAT189 -> ProcessSetup._TM301RunOutChiller (offset 554.0)
-Mapeando STAT131.STAT190 -> ProcessSetup._MinSpeedNominalProd (offset 558.0)
-Mapeando STAT131.STAT191 -> ProcessSetup._MinSpeedSlowProd (offset 562.0)
-Mapeando STAT131.STAT192 -> ProcessSetup._FastChgOvrTM301DrnPrss (offset 566.0)
-Mapeando STAT131.STAT193 -> ProcessSetup._CIPTN301MinLevel (offset 570.0)
-Mapeando STAT131.STAT194 -> ProcessSetup._CIPTN301MaxLevel (offset 574.0)
-Mapeando STAT131.STAT195 -> ProcessSetup._ProdPPN304Freq (offset 578.0)
-Mapeando STAT131.STAT196 -> ProcessSetup._GAS2InjectionPress (offset 582.0)
-Mapeando STAT131.STAT197 -> ProcessSetup._BaialageRVM301OVMax (offset 586.0)
-Mapeando STAT131.STAT198 -> ProcessSetup._RinsePPN301Freq (offset 590.0)
-Mapeando STAT131.STAT199 -> ProcessSetup._CIPPPN301Freq (offset 594.0)
-Mapeando STAT131.STAT200 -> ProcessSetup._RinsePPP302Freq (offset 598.0)
-Mapeando STAT131.STAT201 -> ProcessSetup._CIPPPP302Freq (offset 602.0)
-Mapeando STAT131.STAT202 -> ProcessSetup._PercSyrupBrixSyrStarUp (offset 606.0)
-Mapeando STAT131.STAT203 -> ProcessSetup._RefTempCoolingCTRL (offset 610.0)
-Mapeando STAT131.STAT204 -> ProcessSetup._H2OSerpPrimingVolume (offset 614.0)
-Mapeando STAT131.STAT205 -> ProcessSetup._AVN301_Nozzle_Kv (offset 618.0)
-Mapeando STAT131.STAT206 -> ProcessSetup._AVN302_Nozzle_Kv (offset 622.0)
-Mapeando STAT131.STAT207 -> ProcessSetup._AVN303_Nozzle_Kv (offset 626.0)
-Mapeando STAT131.STAT208 -> ProcessSetup._DeoxSpryball_Kv (offset 630.0)
-Mapeando STAT131.STAT209 -> ProcessSetup._PremixedLineDrainTime (offset 634.0)
-Mapeando STAT131.STAT210 -> ProcessSetup._PPN301_H_MaxFlow (offset 636.0)
-Mapeando STAT131.STAT211 -> ProcessSetup._PPN301_H_MinFlow (offset 640.0)
-Mapeando STAT131.STAT212 -> ProcessSetup._PPN301_MaxFlow (offset 644.0)
-Mapeando STAT131.STAT213 -> ProcessSetup._PPN301_MinFlow (offset 648.0)
-Mapeando STAT131.STAT214 -> ProcessSetup._PPP302_H_MaxFlow (offset 652.0)
-Mapeando STAT131.STAT215 -> ProcessSetup._PPP302_H_MinFlow (offset 656.0)
-Mapeando STAT131.STAT216 -> ProcessSetup._PPP302_MaxFlow (offset 660.0)
-Mapeando STAT131.STAT217 -> ProcessSetup._PPP302_MinFlow (offset 664.0)
-Mapeando STAT131.STAT218 -> ProcessSetup._RinsePPM306Freq (offset 668.0)
-Mapeando STAT131.STAT219 -> ProcessSetup._CIPPPM306Freq (offset 672.0)
-Mapeando STAT131.STAT220 -> ProcessSetup._PPM307_H_MaxFlow (offset 676.0)
-Mapeando STAT131.STAT221 -> ProcessSetup._PPM307_H_MinFlow (offset 680.0)
-Mapeando STAT131.STAT222 -> ProcessSetup._PPM307_MaxFlow (offset 684.0)
-Mapeando STAT131.STAT223 -> ProcessSetup._PPM307_MinFlow (offset 688.0)
-Mapeando STAT131.STAT224 -> ProcessSetup._Temp0_VacuumCtrl (offset 692.0)
-Mapeando STAT131.STAT225 -> ProcessSetup._Temp1_VacuumCtrl (offset 696.0)
-Mapeando STAT131.STAT226 -> ProcessSetup._Temp2_VacuumCtrl (offset 700.0)
-Mapeando STAT131.STAT227 -> ProcessSetup._Temp3_VacuumCtrl (offset 704.0)
-Mapeando STAT131.STAT228 -> ProcessSetup._Temp4_VacuumCtrl (offset 708.0)
-Mapeando STAT131.STAT229 -> ProcessSetup._T1_VacuumCtrl (offset 712.0)
-Mapeando STAT131.STAT230 -> ProcessSetup._T2_VacuumCtrl (offset 716.0)
-Mapeando STAT131.STAT231 -> ProcessSetup._T3_VacuumCtrl (offset 720.0)
-Mapeando STAT131.STAT232 -> ProcessSetup._T4_VacuumCtrl (offset 724.0)
-Mapeando STAT131.STAT233 -> ProcessSetup._ICS_VolDosWorkTimePAA (offset 728.0)
-Mapeando STAT131.STAT234 -> ProcessSetup._ICS_VolPauseTimePAA (offset 730.0)
-Mapeando STAT131.STAT235 -> ProcessSetup._ICS_PAAPulseWeight (offset 732.0)
-Mapeando STAT131.STAT236 -> ProcessSetup._ICS_CausticPulseWeight (offset 734.0)
-Mapeando STAT131.STAT237 -> ProcessSetup._ICS_AcidPulseWeight (offset 736.0)
-Mapeando STAT131.STAT238 -> ProcessSetup._ICS_VolumeRestOfLine (offset 738.0)
-Mapeando STAT131.STAT239 -> ProcessSetup._ICS_VolDosWorkTimeCaus (offset 742.0)
-Mapeando STAT131.STAT240 -> ProcessSetup._ICS_VolDosPauseTimeCaus (offset 744.0)
-Mapeando STAT131.STAT241 -> ProcessSetup._ICS_VolDosWorkTimeAcid (offset 746.0)
-Mapeando STAT131.STAT242 -> ProcessSetup._ICS_VolDosPauseTimeAcid (offset 748.0)
-Mapeando STAT131.STAT243 -> ProcessSetup._ICS_ConcDosWorkTimeCaus (offset 750.0)
-Mapeando STAT131.STAT244 -> ProcessSetup._ICS_ConcDosPausTimeCaus (offset 752.0)
-Mapeando STAT131.STAT245 -> ProcessSetup._ICS_ConcDosWorkTimeAcid (offset 754.0)
-Mapeando STAT131.STAT246 -> ProcessSetup._ICS_ConcDosPausTimeAcid (offset 756.0)
-Mapeando STAT131.STAT247 -> ProcessSetup._RinsePPM307Freq (offset 758.0)
-Mapeando STAT131.STAT248 -> ProcessSetup._CIPPPM307Freq (offset 762.0)
-Mapeando STAT131.STAT249 -> ProcessSetup._CIP2StepTN301Lvl (offset 766.0)
-Mapeando STAT131.STAT250 -> ProcessSetup._CIP2StepTM301Lvl (offset 770.0)
-Mapeando STAT131.STAT251 -> ProcessSetup._CIP2StepTP301Lvl (offset 774.0)
-Mapeando STAT131.STAT252 -> ProcessSetup._PumpNominalFreq (offset 778.0)
-Mapeando STAT253 -> _SwitchOff_DensityOK (offset 782.0)
-Mapeando STAT254 -> STAT254 (offset 784.0)
 Archivo _updated generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_updated.json
+Archivo de comparación Excel generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_comparison.xlsx
+Archivo Markdown generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.md
+Archivo S7 generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.txt
+Archivo S7 copiado a: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\db1001_updated.db
 
 --- Proceso completado ---
 
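Note: the removed "Mapeando ... (offset X)" lines above are the heart of x7: both DBs are flattened to offset-ordered variable lists, checked for compatibility (284 vs. 284 variables in this run), and each generic `STATn` path from `_data` is paired with the symbolic path at the same offset in `_format`. A minimal sketch of that pairing, assuming dict-shaped members; `flatten_by_offset` and `map_data_to_format` are hypothetical names, not the real x7_value_updater.py API:

```python
# Hedged sketch of the offset-based pairing suggested by the "Mapeando" log lines.
from typing import Any, Dict, List, Tuple

def flatten_by_offset(members: List[Dict[str, Any]], prefix: str = "") -> List[Tuple[float, str]]:
    """Flatten a parsed DB member tree into (byte_offset, full_path) pairs."""
    flat: List[Tuple[float, str]] = []
    for m in members:
        path = f"{prefix}{m['name']}"
        flat.append((m["byte_offset"], path))
        flat.extend(flatten_by_offset(m.get("children", []), f"{path}."))
    return flat

def map_data_to_format(data_members, format_members) -> Dict[str, str]:
    """Pair each _data path (STATn...) with the _format path at the same offset."""
    data_flat = sorted(flatten_by_offset(data_members))
    format_flat = sorted(flatten_by_offset(format_members))
    if len(data_flat) != len(format_flat):
        raise ValueError("Incompatible structures: variable counts differ")
    return {d: f for (d_off, d), (f_off, f) in zip(data_flat, format_flat) if d_off == f_off}
```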
@@ -42,9 +42,9 @@
     "hidden": false
   },
   "x7_value_updater.py": {
-    "display_name": "x7_value_updater",
-    "short_description": "Sin descripción corta.",
-    "long_description": "",
+    "display_name": "07: Actualizar Valores de DB (JSON)",
+    "short_description": "Busca archivos .db o .awl con la terminacion _data y _format. Si los encuentra y son compatibles usa los datos de _data para generar un _updated con los nombres de las variables de _format",
+    "long_description": "Procesa pares de archivos a JSON (_data.json y _format.json, generados por x3.py). Compara sus estructuras por offset para asegurar compatibilidad. Si son compatibles, crea un nuevo archivo _updated.json que combina la estructura del _format.json con los valores actuales del _data.json.",
     "hidden": false
   }
 }
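Note: the new `short_description` documents the `_data`/`_format` pairing convention on disk. A sketch of how such pairs could be discovered, under the stated assumption that only the suffix differs; `find_pairs` is a hypothetical helper, not code from this commit:

```python
# Hedged sketch of the *_data/*_format pair lookup described in the config text.
import glob
import os

def find_pairs(working_dir: str):
    """Yield (data_path, format_path) for each *_data.db/.awl with a matching *_format file."""
    for ext in (".db", ".awl"):
        for data_path in glob.glob(os.path.join(working_dir, f"*_data{ext}")):
            format_path = data_path.replace(f"_data{ext}", f"_format{ext}")
            if os.path.exists(format_path):
                yield data_path, format_path
```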
@@ -1,10 +1,10 @@
-# --- x3.py (Modificaciones v_final_4 - Incluye 'count' para ArrayDimension y ajuste debug) ---
+# --- x3_refactored.py ---
 import re
 import json
 from dataclasses import dataclass, field
 from typing import List, Dict, Optional, Union, Tuple, Any
-import os # Asegurarse de que os está importado
-import glob # Para buscar archivos
+import os
+import glob
 import copy
 import sys
 
@@ -29,11 +29,11 @@ class ArrayDimension:
     upper_bound: int
 
     @property
-    def count(self) -> int: # La propiedad 'count' se calculará
+    def count(self) -> int:
         return self.upper_bound - self.lower_bound + 1
 
 @dataclass
-class VariableInfo: # Sin cambios respecto a v_final_3
+class VariableInfo:
     name: str
     data_type: str
     byte_offset: float
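Note: `count` turns the declared bounds into an element count; an `ARRAY[1..9]` dimension has 9 elements. A quick check, assuming the dataclass declares `lower_bound` and `upper_bound` fields:

```python
# 9 - 1 + 1 == 9 elements for ARRAY[1..9].
dim = ArrayDimension(lower_bound=1, upper_bound=9)
assert dim.count == 9
```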
@@ -50,7 +50,7 @@ class VariableInfo: # Sin cambios respecto a v_final_3
     current_element_values: Optional[Dict[str, str]] = None
 
 @dataclass
-class UdtInfo: # Sin cambios respecto a v_final_3
+class UdtInfo:
     name: str
     family: Optional[str] = None
     version: Optional[str] = None
@@ -58,23 +58,23 @@ class UdtInfo: # Sin cambios respecto a v_final_3
     total_size_in_bytes: int = 0
 
 @dataclass
-class DbInfo: # Sin cambios respecto a v_final_3
+class DbInfo:
     name: str
     title: Optional[str] = None
     family: Optional[str] = None
     version: Optional[str] = None
     members: List[VariableInfo] = field(default_factory=list)
     total_size_in_bytes: int = 0
-    _begin_block_assignments_ordered: List[Tuple[str, str]] = field(default_factory=list)
-    _initial_values_from_begin_block: Dict[str, str] = field(default_factory=dict)
+    # Eliminamos los campos redundantes:
+    # _begin_block_assignments_ordered y _initial_values_from_begin_block
 
 @dataclass
-class ParsedData: # Sin cambios
+class ParsedData:
     udts: List[UdtInfo] = field(default_factory=list)
     dbs: List[DbInfo] = field(default_factory=list)
 
 @dataclass
-class OffsetContext: # Sin cambios
+class OffsetContext:
     byte_offset: int = 0
     bit_offset: int = 0
     def get_combined_offset(self) -> float:
@@ -89,7 +89,7 @@ class OffsetContext: # Sin cambios
         if self.byte_offset % 2 != 0: self.byte_offset += 1
 # --- Fin Estructuras de Datos ---
 
-S7_PRIMITIVE_SIZES = { # Sin cambios
+S7_PRIMITIVE_SIZES = {
     "BOOL": (0, 1, True), "BYTE": (1, 1, False), "CHAR": (1, 1, False),
     "SINT": (1, 1, False), "USINT": (1, 1, False), "WORD": (2, 2, False),
     "INT": (2, 2, False), "UINT": (2, 2, False), "S5TIME": (2, 2, False),
@@ -100,7 +100,7 @@ S7_PRIMITIVE_SIZES = { # Sin cambios
     "LWORD": (8, 2, False), "DATE_AND_TIME": (8, 2, False), "DT": (8, 2, False),
 }
 
-class S7Parser: # Sin cambios en __init__ respecto a v_final_3
+class S7Parser:
     def __init__(self):
         self.parsed_data = ParsedData()
         self.known_udts: Dict[str, UdtInfo] = {}
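Note: each `S7_PRIMITIVE_SIZES` entry is a `(size_in_bytes, alignment, is_bool)` tuple. The parser's own cursor logic lives in `OffsetContext` and is only partially visible in this diff, so the following is a hedged sketch of how such tuples typically drive S7 layout (BOOLs pack 8 per byte, 2-byte types are WORD-aligned), not the parser's actual code:

```python
# Hedged sketch: a byte/bit layout cursor driven by (size, alignment, is_bool).
SIZES = {"BOOL": (0, 1, True), "BYTE": (1, 1, False), "INT": (2, 2, False)}

def advance(byte_off: int, bit_off: int, type_name: str) -> tuple:
    size, align, is_bool = SIZES[type_name]
    if is_bool:
        bit_off += 1                      # BOOLs pack 8 per byte
        byte_off += bit_off // 8
        return byte_off, bit_off % 8
    if bit_off:                           # close a partially used byte first
        byte_off, bit_off = byte_off + 1, 0
    if align == 2 and byte_off % 2:       # WORD alignment for 2-byte types
        byte_off += 1
    return byte_off + size, bit_off

cur = (0, 0)
for t in ("BOOL", "BOOL", "BOOL", "INT"):  # bits 0.0, 0.1, 0.2; INT at bytes 2..3
    cur = advance(*cur, t)
print(cur)  # (4, 0): next free byte after the INT
```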
@@ -125,7 +125,7 @@ class S7Parser: # Sin cambios en __init__ respecto a v_final_3
         )
         self.array_dim_regex = re.compile(r'(\d+)\s*\.\.\s*(\d+)')
 
-    def _get_type_details(self, type_name_raw_cleaned: str) -> Tuple[int, int, bool, str]: # Sin cambios
+    def _get_type_details(self, type_name_raw_cleaned: str) -> Tuple[int, int, bool, str]:
         type_name_upper = type_name_raw_cleaned.upper()
         if type_name_upper in S7_PRIMITIVE_SIZES:
             size, align, is_bool = S7_PRIMITIVE_SIZES[type_name_upper]
@@ -138,7 +138,7 @@ class S7Parser: # Sin cambios en __init__ respecto a v_final_3
         raise ValueError(f"Tipo de dato desconocido o UDT no definido: '{type_name_raw_cleaned}'")
 
     @staticmethod
-    def _adjust_children_offsets(children: List[VariableInfo], base_offset_add: float): # Sin cambios
+    def _adjust_children_offsets(children: List[VariableInfo], base_offset_add: float):
         for child in children:
             child.byte_offset += base_offset_add
             if child.byte_offset == float(int(child.byte_offset)):
@@ -149,7 +149,7 @@ class S7Parser: # Sin cambios en __init__ respecto a v_final_3
     def _parse_struct_members(self, lines: List[str], current_line_idx: int,
                               parent_members_list: List[VariableInfo],
                               active_context: OffsetContext,
-                              is_top_level_struct_in_block: bool = False) -> int: # Ajuste en depuración
+                              is_top_level_struct_in_block: bool = False) -> int:
         idx_to_process = current_line_idx
         while idx_to_process < len(lines):
             original_line_text = lines[idx_to_process].strip()
@@ -178,11 +178,11 @@ class S7Parser: # Sin cambios en __init__ respecto a v_final_3
                 active_context.align_to_byte()
                 if active_context.byte_offset % 2 != 0: active_context.byte_offset += 1
                 return line_index_for_return
-            if is_main_block_end_struct: # Simplemente lo ignoramos aquí, será manejado por END_TYPE/DB
+            if is_main_block_end_struct:
                 pass
 
             var_match = self.var_regex_simplified.match(line_to_parse)
-            if var_match: # Lógica de var_match sin cambios respecto a v_final_3
+            if var_match:
                 var_data = var_match.groupdict()
                 raw_base_type_from_regex = var_data['basetype'].strip()
                 clean_data_type = raw_base_type_from_regex.strip('"')
@ -244,54 +244,186 @@ class S7Parser: # Sin cambios en __init__ respecto a v_final_3
|
||||||
if expanded_member.children: S7Parser._adjust_children_offsets(expanded_member.children, udt_instance_abs_start_offset)
|
if expanded_member.children: S7Parser._adjust_children_offsets(expanded_member.children, udt_instance_abs_start_offset)
|
||||||
var_info.children.append(expanded_member)
|
var_info.children.append(expanded_member)
|
||||||
parent_members_list.append(var_info)
|
parent_members_list.append(var_info)
|
||||||
# Ajuste de la condición del mensaje de depuración
|
|
||||||
elif line_to_parse and \
|
elif line_to_parse and \
|
||||||
not self.struct_start_regex.match(line_to_parse) and \
|
not self.struct_start_regex.match(line_to_parse) and \
|
||||||
not is_main_block_end_struct and \
|
not is_main_block_end_struct and \
|
||||||
not is_nested_end_struct and \
|
not is_nested_end_struct and \
|
||||||
not is_block_terminator : # Solo imprimir si no es un terminador conocido
|
not is_block_terminator :
|
||||||
print(f"DEBUG (_parse_struct_members): Line not parsed: Original='{original_line_text}' | Processed='{line_to_parse}'")
|
print(f"DEBUG (_parse_struct_members): Line not parsed: Original='{original_line_text}' | Processed='{line_to_parse}'")
|
||||||
return idx_to_process
|
return idx_to_process
|
||||||
|
|
||||||
def _parse_begin_block(self, lines: List[str], start_idx: int, db_info: DbInfo) -> int: # Sin cambios
|
def _parse_begin_block(self, lines: List[str], start_idx: int, db_info: DbInfo) -> int:
|
||||||
|
"""
|
||||||
|
Parsea el bloque BEGIN y aplica directamente los valores a las variables
|
||||||
|
correspondientes, calculando también offsets para elementos de arrays.
|
||||||
|
"""
|
||||||
idx = start_idx
|
idx = start_idx
|
||||||
assignment_regex = re.compile(r'^\s*(?P<path>.+?)\s*:=\s*(?P<value>.+?)\s*;?\s*$', re.IGNORECASE)
|
assignment_regex = re.compile(r'^\s*(?P<path>.+?)\s*:=\s*(?P<value>.+?)\s*;?\s*$', re.IGNORECASE)
|
||||||
|
|
||||||
|
# Diccionario temporal para mapear rutas a variables
|
||||||
|
path_to_var_map = {}
|
||||||
|
|
||||||
|
# Función para calcular offset de elemento de array
|
||||||
|
def calculate_array_element_offset(var: VariableInfo, indices_str: str) -> float:
|
||||||
|
# Parsear los índices (pueden ser múltiples para arrays multidimensionales)
|
||||||
|
indices = [int(idx.strip()) for idx in indices_str.split(',')]
|
||||||
|
|
||||||
|
# Obtener dimensiones del array
|
||||||
|
dimensions = var.array_dimensions
|
||||||
|
if not dimensions or len(indices) != len(dimensions):
|
||||||
|
return var.byte_offset # No podemos calcular, devolver offset base
|
||||||
|
|
||||||
|
# Determinar tamaño de cada elemento base
|
||||||
|
element_size = 0
|
||||||
|
is_bit_array = False
|
||||||
|
|
||||||
|
if var.data_type.upper() == "BOOL":
|
||||||
|
is_bit_array = True
|
||||||
|
element_size = 0.1 # 0.1 byte = 1 bit (representación decimal)
|
||||||
|
elif var.data_type.upper() == "STRING" and var.string_length is not None:
|
||||||
|
element_size = var.string_length + 2
|
||||||
|
else:
|
||||||
|
# Para tipos primitivos y UDTs
|
||||||
|
data_type_upper = var.data_type.upper()
|
||||||
|
if data_type_upper in S7_PRIMITIVE_SIZES:
|
||||||
|
element_size = S7_PRIMITIVE_SIZES[data_type_upper][0]
|
||||||
|
elif var.data_type in self.known_udts:
|
||||||
|
element_size = self.known_udts[var.data_type].total_size_in_bytes
|
||||||
|
else:
|
||||||
|
# Si no podemos determinar tamaño, usar tamaño total / elementos
|
||||||
|
total_elements = 1
|
||||||
|
for dim in dimensions:
|
||||||
|
total_elements *= dim.count
|
||||||
|
if total_elements > 0 and var.size_in_bytes > 0:
|
||||||
|
element_size = var.size_in_bytes / total_elements
|
||||||
|
|
||||||
|
# Calcular offset para arrays multidimensionales
|
||||||
|
# Necesitamos calcular el índice lineal basado en índices multidimensionales
|
||||||
|
linear_index = 0
|
||||||
|
dimension_multiplier = 1
|
||||||
|
|
||||||
|
# Calcular desde la dimensión más interna a la más externa
|
||||||
|
# Los índices en S7 comienzan en las dimensiones a la izquierda
|
||||||
|
for i in range(len(indices)-1, -1, -1):
|
||||||
|
# Ajustar por el índice inicial de cada dimensión
|
||||||
|
adjusted_index = indices[i] - dimensions[i].lower_bound
|
||||||
|
linear_index += adjusted_index * dimension_multiplier
|
||||||
|
# Multiplicador para la siguiente dimensión
|
||||||
|
if i > 0: # No es necesario para la última iteración
|
||||||
|
dimension_multiplier *= dimensions[i].count
|
||||||
|
|
||||||
|
# Para arrays de bits, tenemos que calcular bit por bit
|
||||||
|
if is_bit_array:
|
||||||
|
base_byte = int(var.byte_offset)
|
||||||
|
base_bit = int(round((var.byte_offset - base_byte) * 10))
|
||||||
|
|
||||||
|
# Calcular nuevo bit y byte
|
||||||
|
new_bit = base_bit + linear_index
|
||||||
|
new_byte = base_byte + (new_bit // 8)
|
||||||
|
new_bit_position = new_bit % 8
|
||||||
|
|
||||||
|
return float(new_byte) + (float(new_bit_position) / 10.0)
|
||||||
|
else:
|
||||||
|
# Para tipos regulares, simplemente sumamos el offset lineal
|
||||||
|
return var.byte_offset + (linear_index * element_size)

# Build the path-to-variable map
def build_path_map(members: List[VariableInfo], prefix: str = ""):
    for var in members:
        var_path = f"{prefix}{var.name}"
        path_to_var_map[var_path] = var

        # For arrays, initialise the per-element value dict if needed
        if var.array_dimensions:
            var.current_element_values = {}

        # Recurse into variables that have children
        if var.children:
            build_path_map(var.children, f"{var_path}.")

# Build the map before processing the BEGIN block
build_path_map(db_info.members)
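
# Illustrative only (hypothetical member names): for a member "Motor" (a
# STRUCT with child "Speed") and an array "Alarms", the map ends up as
#     {"Motor": <VariableInfo>, "Motor.Speed": <VariableInfo>, "Alarms": <VariableInfo>}
# so a BEGIN assignment like "Motor.Speed := 50;" resolves with one dict lookup.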

# Now process the BEGIN block
while idx < len(lines):
    original_line = lines[idx].strip()
    line_to_parse = original_line
    comment_marker = original_line.find("//")
    if comment_marker != -1:
        line_to_parse = original_line[:comment_marker].strip()

    if self.end_db_regex.match(line_to_parse):
        break

    idx += 1
    if not line_to_parse:
        continue

    match = assignment_regex.match(line_to_parse)
    if match:
        path, value = match.group("path").strip(), match.group("value").strip().rstrip(';').strip()

        # Distinguish array-element assignments from plain variables
        if '[' in path and ']' in path:
            # Array element
            array_path = path[:path.find('[')]
            indices = path[path.find('[')+1:path.find(']')]

            if array_path in path_to_var_map:
                var = path_to_var_map[array_path]
                if var.current_element_values is None:
                    var.current_element_values = {}

                # Compute and store the element's real offset
                element_offset = calculate_array_element_offset(var, indices)

                # Store as an object carrying both value and offset
                var.current_element_values[indices] = {
                    "value": value,
                    "offset": element_offset
                }
        elif path in path_to_var_map:
            # Plain variable (or a whole array)
            var = path_to_var_map[path]
            var.current_value = value

        # Also handle hierarchical paths (e.g., MyStruct.MyField)
        if '.' in path and '[' not in path:  # For simplicity, skip arrays inside hierarchical paths
            parts = path.split('.')
            current_path = ""
            current_var = None

            # Walk down the hierarchy
            for i, part in enumerate(parts):
                if current_path:
                    current_path += f".{part}"
                else:
                    current_path = part

                if current_path in path_to_var_map:
                    current_var = path_to_var_map[current_path]

                    # On the last component, assign the value
                    if i == len(parts) - 1 and current_var:
                        current_var.current_value = value

# Propagate initial values to variables without an explicit assignment
def propagate_initial_values(members: List[VariableInfo]):
    for var in members:
        # If current_value is unset but initial_value exists, copy it
        if var.current_value is None and var.initial_value is not None:
            var.current_value = var.initial_value

        # Recurse into children
        if var.children:
            propagate_initial_values(var.children)

# Propagate initial values
propagate_initial_values(db_info.members)

return idx
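
# Illustrative only (hypothetical DB source): given a BEGIN block such as
#     BEGIN
#         Motor.Speed := 50;
#         Alarms[3] := TRUE;
#     END_DATA_BLOCK
# the loop above sets current_value on the "Motor.Speed" VariableInfo and
# stores {"3": {"value": "TRUE", "offset": <computed>}} on the "Alarms" array.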

def parse_file(self, filepath: str) -> ParsedData:
    try:
        with open(filepath, 'r', encoding='utf-8-sig') as f:
            lines = f.readlines()
    except Exception as e:
        print(f"Error al leer el archivo {filepath}: {e}")
        return self.parsed_data

@@ -334,42 +466,260 @@ class S7Parser:

            elif self.end_type_regex.match(line_to_parse) and isinstance(current_block_handler, UdtInfo):
                if current_block_handler.total_size_in_bytes == 0:
                    current_block_handler.total_size_in_bytes = active_block_context.byte_offset
                self.known_udts[current_block_handler.name] = current_block_handler
                current_block_handler = None
                parsing_title_value_next_line = False
            elif self.end_db_regex.match(line_to_parse) and isinstance(current_block_handler, DbInfo):
                if current_block_handler.total_size_in_bytes == 0:
                    current_block_handler.total_size_in_bytes = active_block_context.byte_offset
                # No need to apply values here any more; they are set directly in _parse_begin_block
                current_block_handler = None
                parsing_title_value_next_line = False
            idx += 1
        return self.parsed_data

def custom_json_serializer(obj: Any) -> Any:
    if isinstance(obj, OffsetContext):
        return None
    if isinstance(obj, ArrayDimension):
        return {
            'lower_bound': obj.lower_bound,
            'upper_bound': obj.upper_bound,
            'count': obj.count
        }
    if hasattr(obj, '__dict__'):
        d = {k: v for k, v in obj.__dict__.items()
             if not (v is None or (isinstance(v, list) and not v))}

        if isinstance(obj, VariableInfo):
            if not obj.is_udt_expanded_member and 'is_udt_expanded_member' not in d:
                d['is_udt_expanded_member'] = False
            # Handle current_element_values, which carries per-element offsets
            if 'current_element_values' in d:
                if not d['current_element_values']:
                    del d['current_element_values']
                else:
                    # Make sure current_element_values serialises correctly,
                    # preserving the {index: {value, offset}} shape
                    element_values = d['current_element_values']
                    if isinstance(element_values, dict):
                        d['current_element_values'] = element_values
        return d
    raise TypeError(f"Object of type {obj.__class__.__name__} is not JSON serializable: {type(obj)}")
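
# Typical use of the serializer (variable names illustrative): pass it as the
# json "default" hook so these dataclass-style objects serialise cleanly, e.g.
#     json.dump(parsed_result, f, default=custom_json_serializer, indent=2)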

def format_address_for_display(byte_offset: float, bit_size: int = 0) -> str:
    """
    Formats the address for display, preserving the bit index for BOOLs.

    Args:
        byte_offset: The byte offset (with a decimal part for bits)
        bit_size: Size in bits (>0 for BOOLs)

    Returns:
        A string formatted as "X.Y" for bits or "X" for whole bytes
    """
    if bit_size > 0:
        # For BOOL, extract and show the exact byte and bit
        byte_part = int(byte_offset)
        # Multiply by 10 and round to recover the bit index
        bit_part = int(round((byte_offset - byte_part) * 10))
        return f"{byte_part}.{bit_part}"
    else:
        # For other types, show an integer when it is a whole byte
        if byte_offset == float(int(byte_offset)):
            return str(int(byte_offset))
        return f"{byte_offset:.1f}"

def compare_offsets(offset1: float, offset2: float) -> int:
    """
    Compares two offsets, taking both the byte part and the bit part into account.

    Returns:
        -1 if offset1 < offset2, 0 if equal, 1 if offset1 > offset2
    """
    # Split into byte and bit parts
    byte1 = int(offset1)
    bit1 = int(round((offset1 - byte1) * 10))

    byte2 = int(offset2)
    bit2 = int(round((offset2 - byte2) * 10))

    # Compare by byte first
    if byte1 < byte2:
        return -1
    elif byte1 > byte2:
        return 1

    # Bytes are equal; compare by bit
    if bit1 < bit2:
        return -1
    elif bit1 > bit2:
        return 1

    # Exactly equal
    return 0
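
# compare_offsets is a classic cmp-style comparator, so it can drive a sort
# through functools.cmp_to_key from the standard library, e.g.:
#     from functools import cmp_to_key
#     sorted([4.0, 2.1, 2.0], key=cmp_to_key(compare_offsets))  # -> [2.0, 2.1, 4.0]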

def calculate_array_element_offset(var: VariableInfo, indices_str: str, known_udts=None) -> float:
    """
    Computes the exact offset of an array element from its indices.
    Correctly handles bit arrays and multidimensional arrays.

    Args:
        var: VariableInfo of the array
        indices_str: Index string (e.g. "1,2" for a two-dimensional array)
        known_udts: Optional UDT registry used to size UDT elements. (The
            original referenced self.known_udts here, which does not exist at
            module level; this parameter is the minimal fix.)

    Returns:
        The computed offset as a float, with a decimal part for bits
    """
    # Parse the indices (there may be several for multidimensional arrays)
    indices = [int(idx.strip()) for idx in indices_str.split(',')]

    # Get the array dimensions
    dimensions = var.array_dimensions
    if not dimensions or len(indices) != len(dimensions):
        return var.byte_offset  # Cannot compute; fall back to the base offset

    # Determine the size of each base element
    element_size = 0
    is_bit_array = False

    if var.data_type.upper() == "BOOL":
        is_bit_array = True
        element_size = 0.1  # 0.1 byte = 1 bit (decimal representation)
    elif var.data_type.upper() == "STRING" and var.string_length is not None:
        element_size = var.string_length + 2  # Strings carry a 2-byte header
    else:
        # Primitive types and UDTs
        data_type_upper = var.data_type.upper()
        if data_type_upper in S7_PRIMITIVE_SIZES:
            element_size = S7_PRIMITIVE_SIZES[data_type_upper][0]
        elif known_udts is not None and var.data_type in known_udts:
            element_size = known_udts[var.data_type].total_size_in_bytes
        else:
            # If the size cannot be determined, use total size / element count
            total_elements = 1
            for dim in dimensions:
                total_elements *= dim.count
            if total_elements > 0 and var.size_in_bytes > 0:
                element_size = var.size_in_bytes / total_elements

    # Compute the offset for multidimensional arrays.
    # In S7, arrays are stored in row-major order (the last dimension varies fastest)
    linear_index = 0
    dimension_multiplier = 1

    # Walk from the innermost dimension to the outermost;
    # for S7 we process from the last dimension back to the first
    for i in range(len(indices) - 1, -1, -1):
        # Adjust for the lower bound of each dimension
        adjusted_index = indices[i] - dimensions[i].lower_bound
        linear_index += adjusted_index * dimension_multiplier
        # Multiplier for the next dimension
        if i > 0:  # Not needed on the last iteration
            dimension_multiplier *= dimensions[i].count

    # Compute the offset according to the type
    if is_bit_array:
        # Bit arrays advance bit by bit
        base_byte = int(var.byte_offset)
        base_bit = int(round((var.byte_offset - base_byte) * 10))

        # Compute the new byte and bit
        new_bit = base_bit + linear_index
        new_byte = base_byte + (new_bit // 8)
        new_bit_position = new_bit % 8

        # S7 format: byte.bit with bit in 0-7
        return float(new_byte) + (float(new_bit_position) / 10.0)
    else:
        # Regular types: simply add linear index * element size
        return var.byte_offset + (linear_index * element_size)
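
# Worked example (hypothetical declaration): for "ARRAY [1..2, 1..3] OF INT"
# at byte 20, element [2,1] gets linear index (2-1)*3 + (1-1) = 3; with
# INT = 2 bytes the returned offset is 20 + 3*2 = 26.0.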

def flatten_db_structure(db_info: Dict[str, Any]) -> List[Dict[str, Any]]:
    """
    Generic helper that fully flattens a DB/UDT structure, expanding every
    nested variable, UDT and array element.
    Guarantees strict ordering by offset (byte.bit).

    Returns:
        List[Dict]: Flattened variables with all their attributes plus a
        full path, sorted strictly by offset.
    """
    flat_variables = []
    processed_ids = set()  # Avoid duplicates

    def process_variable(var: Dict[str, Any], path_prefix: str = "", is_expansion: bool = False):
        # Unique identifier for this variable in this context
        var_id = f"{path_prefix}{var['name']}_{var['byte_offset']}"

        # Skip duplicates (such as expanded UDT members)
        if is_expansion and var_id in processed_ids:
            return
        if is_expansion:
            processed_ids.add(var_id)

        # Copy the variable and attach its full path
        flat_var = var.copy()
        flat_var["full_path"] = f"{path_prefix}{var['name']}"
        flat_var["is_array_element"] = False  # Not an array element by default

        # Is this an array with element-specific values?
        is_array = bool(var.get("array_dimensions"))
        has_array_values = is_array and var.get("current_element_values")

        # If it is not an array with specific values, add the base variable
        if not has_array_values:
            # Make sure the offset is in the right display format
            flat_var["address_display"] = format_address_for_display(var["byte_offset"], var.get("bit_size", 0))
            flat_variables.append(flat_var)

        # If it is an array with specific values, expand each element as its own variable
        if has_array_values:
            for idx, element_data in var.get("current_element_values", {}).items():
                # Extract the element's value and offset
                if isinstance(element_data, dict) and "value" in element_data and "offset" in element_data:
                    # New format with a precomputed offset
                    value = element_data["value"]
                    element_offset = element_data["offset"]
                else:
                    # Backwards compatibility with the old format
                    value = element_data
                    element_offset = var["byte_offset"]  # Base offset

                # Create one entry per array element
                array_element = var.copy()
                array_element["full_path"] = f"{path_prefix}{var['name']}[{idx}]"
                array_element["is_array_element"] = True
                array_element["array_index"] = idx
                array_element["current_value"] = value
                array_element["byte_offset"] = element_offset  # Use the computed offset
                array_element["address_display"] = format_address_for_display(element_offset, var.get("bit_size", 0))

                # Drop current_element_values to avoid redundancy
                if "current_element_values" in array_element:
                    del array_element["current_element_values"]

                flat_variables.append(array_element)

        # Recurse into all children
        if var.get("children"):
            for child in var.get("children", []):
                process_variable(
                    child,
                    f"{path_prefix}{var['name']}.",
                    is_expansion=bool(var.get("udt_source_name"))
                )

    # Process all members from the top level
    for member in db_info.get("members", []):
        process_variable(member)

    # Sort strictly by byte.bit offset
    flat_variables.sort(key=lambda x: (
        int(x["byte_offset"]),
        int(round((x["byte_offset"] - int(x["byte_offset"])) * 10))
    ))

    return flat_variables
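
# Minimal usage sketch (file name and loop body hypothetical): flatten a
# parsed DB loaded from one of the generated JSON files and print each
# variable at its address.
#     import json
#     with open("example_db.json", "r", encoding="utf-8") as f:
#         data = json.load(f)
#     for db in data.get("dbs", []):
#         for row in flatten_db_structure(db):
#             print(row["address_display"], row["full_path"])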


if __name__ == "__main__":
    working_dir = find_working_directory()
    print(f"Using working directory: {working_dir}")

@@ -388,7 +738,7 @@ if __name__ == "__main__":

    print(f"Archivos encontrados para procesar: {len(all_source_files)}")

    for filepath in all_source_files:
        parser = S7Parser()
        filename = os.path.basename(filepath)
        print(f"\n--- Procesando archivo: {filename} ---")

@@ -1,9 +1,10 @@

# --- x4_refactored.py ---
import json
from typing import List, Dict, Any
import sys
import os
import glob
from x3 import flatten_db_structure

script_root = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.dirname(__file__)))

@@ -19,7 +20,6 @@ def find_working_directory():

        sys.exit(1)
    return working_directory


def format_data_type_for_source(var_info: Dict[str, Any]) -> str:
    base_type = var_info.get("udt_source_name") if var_info.get("udt_source_name") else var_info["data_type"]
    type_str = ""

@@ -45,7 +45,7 @@ def generate_variable_declaration_for_source(var_info: Dict[str, Any], indent_le

    is_multiline_struct_def = (var_info["data_type"].upper() == "STRUCT" and \
                               not var_info.get("udt_source_name") and \
                               var_info.get("children"))
    if not is_multiline_struct_def:
        line += ';'

    if var_info.get("comment"):

@@ -60,103 +60,112 @@ def generate_struct_members_for_source(members: List[Dict[str, Any]], indent_lev

           not var_info.get("udt_source_name") and \
           var_info.get("children"):
            current_indent_str = " " * indent_level
            lines.append(f'{current_indent_str}{var_info["name"]} : STRUCT')
            lines.extend(generate_struct_members_for_source(var_info["children"], indent_level + 1))
            lines.append(f'{current_indent_str}END_STRUCT;')
        else:
            lines.append(generate_variable_declaration_for_source(var_info, indent_level))
    return lines


def generate_begin_block_assignments(db_info: Dict[str, Any], indent_level: int) -> List[str]:
    """
    Generates the BEGIN-block assignments for every variable that carries a
    current value, sorted strictly by offset (byte.bit).
    """
    indent_str = " " * indent_level
    lines = []

    # Get every variable, flattened and already sorted
    flat_vars = flatten_db_structure(db_info)

    # Generate the assignment for each variable, in order
    for var in flat_vars:
        # Only emit variables that actually have a current value to assign
        if var.get("current_value") is not None:
            value_str = str(var["current_value"])
            # Normalise booleans to TRUE/FALSE per the S7 convention
            if value_str.lower() == "true":
                value_str = "TRUE"
            elif value_str.lower() == "false":
                value_str = "FALSE"

            # Emit the assignment line
            lines.append(f"{indent_str}{var['full_path']} := {value_str};")

    return lines
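
# For illustration (paths and values hypothetical), with indent_level=1 the
# function returns lines such as:
#     [" Motor.Speed := 50;", " Alarms[3] := TRUE;"]
# ready to be emitted between BEGIN and END_DATA_BLOCK.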


def generate_markdown_table(db_info: Dict[str, Any]) -> List[str]:
    """
    Generates a complete markdown table with correct bit offsets.
    """
    lines = []
    lines.append(f"## Documentación para DB: {db_info['name']}")
    lines.append("")
    lines.append("| Address | Name | Type | Initial Value | Actual Value | Comment |")
    lines.append("|---|---|---|---|---|---|")

    # Get every variable, flattened (already sorted by offset)
    flat_vars = flatten_db_structure(db_info)

    # Show every variable, including array elements
    for var in flat_vars:
        # Use the precomputed address_display
        address = var["address_display"]
        name_for_display = var["full_path"]

        # Format the type depending on whether this is a plain variable or an array element
        if var.get("is_array_element"):
            # For array elements, show only the base type
            if "array_dimensions" in var:
                # If array info is still present, drop the ARRAY[..] part
                base_type = var["data_type"]
                data_type_str = base_type
            else:
                data_type_str = var["data_type"]
        else:
            # For plain variables, show the full type
            data_type_str = format_data_type_for_source(var)

        # Format the values for the table
        initial_value = str(var.get("initial_value", "")).replace("|", "\\|").replace("\n", " ")
        actual_value = str(var.get("current_value", "")).replace("|", "\\|").replace("\n", " ")
        comment = str(var.get("comment", "")).replace("|", "\\|").replace("\n", " ")

        lines.append(f"| {address} | {name_for_display} | {data_type_str} | {initial_value} | {actual_value} | {comment} |")

    return lines
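
# A produced row might look like (values hypothetical):
#     | 11.1 | Alarms[10] | BOOL |  | TRUE |  |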


def generate_s7_source_code_lines(data: Dict[str, Any]) -> List[str]:
    lines = []
    for udt in data.get("udts", []):
        lines.append(f'TYPE "{udt["name"]}"')
        if udt.get("family"): lines.append(f' FAMILY : {udt["family"]}')
        if udt.get("version"): lines.append(f' VERSION : {udt["version"]}')
        lines.append("")
        lines.append(" STRUCT")
        lines.extend(generate_struct_members_for_source(udt["members"], 2))
        lines.append(" END_STRUCT;")
        lines.append('END_TYPE')
        lines.append("")

    for db in data.get("dbs", []):
        lines.append(f'DATA_BLOCK "{db["name"]}"')
        if db.get("title"):
            lines.append(f' TITLE = {db["title"]}')
        if db.get("family"): lines.append(f' FAMILY : {db["family"]}')
        if db.get("version"): lines.append(f' VERSION : {db["version"]}')
        lines.append("")
        lines.append(" STRUCT")
        lines.extend(generate_struct_members_for_source(db["members"], 2))
        lines.append(" END_STRUCT;")

        begin_assignments = generate_begin_block_assignments(db, 1)
        if begin_assignments:
            lines.append("BEGIN")
            lines.extend(begin_assignments)

        lines.append('END_DATA_BLOCK')
        lines.append("")
    return lines
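
# Shape of the generated source (names hypothetical):
#     DATA_BLOCK "DB_Example"
#      STRUCT
#       Speed : INT := 0;
#      END_STRUCT;
#     BEGIN
#      Speed := 50;
#     END_DATA_BLOCK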


def main():
    working_dir = find_working_directory()
    print(f"Using working directory: {working_dir}")

@@ -188,7 +197,7 @@ def main():

            print(f"Archivo JSON '{current_json_filename}' cargado correctamente.")
        except Exception as e:
            print(f"Error al cargar/leer {current_json_filename}: {e}")
            continue

        # Generate the S7 source file (.txt)
        s7_code_lines = generate_s7_source_code_lines(data_from_json)

@@ -209,11 +218,11 @@ def main():

            for db_index, db_to_document in enumerate(data_from_json["dbs"]):
                if db_index > 0:
                    all_db_markdown_lines.append("\n---\n")

                markdown_lines_for_one_db = generate_markdown_table(db_to_document)
                all_db_markdown_lines.extend(markdown_lines_for_one_db)
                all_db_markdown_lines.append("")

            try:
                with open(md_output_filename, 'w', encoding='utf-8') as f:

@@ -224,7 +233,6 @@ def main():

                print(f"Error al escribir el archivo Markdown {md_output_filename}: {e}")
        else:
            print(f"No se encontraron DBs en {current_json_filename} para generar documentación Markdown.")
            with open(md_output_filename, 'w', encoding='utf-8') as f:
                f.write(f"# Documentación S7 para {json_filename_base}\n\n_Fuente JSON: {current_json_filename}_\n\nNo se encontraron Bloques de Datos (DBs) en este archivo JSON.\n")
            print(f"Archivo Markdown generado (sin DBs): {md_output_filename}")

@@ -1,9 +1,10 @@

# --- x5_refactored.py ---
import json
from typing import List, Dict, Any, Optional
import sys
import os
import glob
from datetime import datetime

script_root = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.dirname(__file__)))

@@ -11,6 +12,10 @@ script_root = os.path.dirname(

sys.path.append(script_root)
from backend.script_utils import load_configuration

# Shared helpers imported from x3
from x3 import flatten_db_structure, format_address_for_display
from x4 import format_data_type_for_source

def find_working_directory():
    configs = load_configuration()
    working_directory = configs.get("working_directory")

@@ -19,94 +24,85 @@ def find_working_directory():

        sys.exit(1)
    return working_directory


def generate_members_table_md(
    db_info: Dict[str, Any],
    is_udt_definition: bool = False,
    include_current_value: bool = True
) -> List[str]:
    """
    Generates a markdown table for every member, using the flattening helpers from x3.
    """
    lines = []

    # Define the table columns
    if include_current_value:
        header = "| Nombre Miembro (Ruta) | Tipo de Dato | Offset (Byte.Bit) | Tamaño (Bytes) | Tamaño (Bits) | Valor Inicial (Decl.) | Valor Actual (Efectivo) | Comentario |"
        separator = "|---|---|---|---|---|---|---|---|"
    else:
        header = "| Nombre Miembro | Tipo de Dato | Offset (Byte.Bit) | Tamaño (Bytes) | Tamaño (Bits) | Valor Inicial | Comentario |"
        separator = "|---|---|---|---|---|---|---|"

    lines.append(header)
    lines.append(separator)

    # Use the imported flattening helper
    flat_vars = flatten_db_structure(db_info)

    # Generate one row per variable
    for var in flat_vars:
        # Use the address formatted by flatten_db_structure
        address = var.get("address_display", format_address_for_display(var["byte_offset"], var.get("bit_size", 0)))
        name_display = f"`{var['full_path']}`"
        data_type_display = f"`{format_data_type_for_source(var)}`"
        size_bytes_display = str(var.get('size_in_bytes', '0'))
        bit_size_display = str(var.get('bit_size', '0')) if var.get('bit_size', 0) > 0 else ""

        initial_value = str(var.get('initial_value', '')).replace("|", "\\|").replace("\n", " ")
        initial_value_display = f"`{initial_value}`" if initial_value else ""

        comment_display = str(var.get('comment', '')).replace("|", "\\|").replace("\n", " ")

        if include_current_value:
            current_value = str(var.get('current_value', '')).replace("|", "\\|").replace("\n", " ")
            current_value_display = f"`{current_value}`" if current_value else ""
            row = f"| {name_display} | {data_type_display} | {address} | {size_bytes_display} | {bit_size_display} | {initial_value_display} | {current_value_display} | {comment_display} |"
        else:
            row = f"| {name_display} | {data_type_display} | {address} | {size_bytes_display} | {bit_size_display} | {initial_value_display} | {comment_display} |"

        lines.append(row)

    return lines
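
# A rendered row (hypothetical values) looks like:
#     | `Motor.Speed` | `INT` | 2 | 2 |  | `0` | `50` | Set by recipe |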
|
||||||
assigned_elements = len(var_info["current_element_values"])
|
|
||||||
if assigned_elements > 0:
|
|
||||||
current_value_display = f"{assigned_elements} elemento(s) asignado(s) en BEGIN"
|
|
||||||
elif var_info.get("current_value") is not None: # Para arrays con una asignación global (raro en BEGIN)
|
|
||||||
current_value_display = str(var_info.get("current_value", '')).replace("|", "\\|").replace("\n", " ")
|
|
||||||
else:
|
|
||||||
current_value_display = ""
|
|
||||||
|
|
||||||
elif var_info.get("current_value") is not None:
|
def generate_begin_block_documentation(db_info: Dict[str, Any]) -> List[str]:
|
||||||
current_value_display = str(var_info.get("current_value", '')).replace("|", "\\|").replace("\n", " ")
|
"""
|
||||||
row += f" `{current_value_display}` |"
|
Genera documentación para el bloque BEGIN utilizando flatten_db_structure.
|
||||||
|
"""
|
||||||
row += f" {comment_display} |"
|
lines = []
|
||||||
md_lines.append(row)
|
lines.append("#### Contenido del Bloque `BEGIN` (Valores Actuales Asignados):")
|
||||||
|
lines.append("El bloque `BEGIN` define los valores actuales de las variables en el DB. Las siguientes asignaciones son ordenadas por offset:")
|
||||||
# Recursión para hijos de STRUCTs o miembros expandidos de UDTs
|
lines.append("")
|
||||||
# `is_udt_expanded_member` en el JSON nos dice si los 'children' son la expansión de un UDT.
|
|
||||||
if var_info.get("children"):
|
# Usar la función de aplanamiento importada de x3
|
||||||
# El prefijo para los hijos es el nombre completo del padre actual.
|
flat_vars = flatten_db_structure(db_info)
|
||||||
# Si el hijo es un miembro expandido de UDT, su propio nombre en 'children' ya es el nombre final del miembro.
|
|
||||||
# Si el hijo es parte de un STRUCT anidado, su nombre es relativo al STRUCT.
|
# Filtrar solo variables con valores actuales
|
||||||
md_lines.extend(generate_members_table_md(
|
vars_with_values = [var for var in flat_vars if var.get("current_value") is not None]
|
||||||
var_info["children"],
|
|
||||||
f"{name_display}.",
|
if vars_with_values:
|
||||||
is_udt_definition,
|
lines.append("```scl")
|
||||||
include_current_value
|
for var in vars_with_values:
|
||||||
))
|
value_str = str(var["current_value"])
|
||||||
|
if value_str.lower() == "true": value_str = "TRUE"
|
||||||
|
elif value_str.lower() == "false": value_str = "FALSE"
|
||||||
|
|
||||||
return md_lines
|
lines.append(f" {var['full_path']} := {value_str};")
|
||||||
|
lines.append("```")
|
||||||
|
lines.append("")
|
||||||
|
else:
|
||||||
|
lines.append("No se encontraron asignaciones de valores actuales.")
|
||||||
|
lines.append("")
|
||||||
|
|
||||||
|
return lines
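
# Sample output (hypothetical DB): the helper yields markdown such as
#     #### Contenido del Bloque `BEGIN` (Valores Actuales Asignados):
#     ```scl
#      Motor.Speed := 50;
#     ```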


def generate_json_documentation(data: Dict[str, Any], output_filename: str):
    """Generates the complete Markdown documentation for the parsed JSON file."""

@@ -129,9 +125,10 @@ def generate_json_documentation(data: Dict[str, Any], output_filename: str):

            lines.append(f"- **Tamaño Total**: {udt['total_size_in_bytes']} bytes")
            lines.append("")
            lines.append("#### Miembros del UDT:")
            # Use the optimised helper to build the table
            udt_member_lines = generate_members_table_md(udt, is_udt_definition=True, include_current_value=False)
            lines.extend(udt_member_lines)
            lines.append("")
    else:
        lines.append("No se encontraron UDTs en el archivo JSON.")

@@ -150,28 +147,14 @@ def generate_json_documentation(data: Dict[str, Any], output_filename: str):

            lines.append("")

            lines.append("#### Miembros del DB (Sección de Declaración):")
            db_member_lines = generate_members_table_md(db, include_current_value=True)
            lines.extend(db_member_lines)
            lines.append("")

            # Build the BEGIN section with the optimised helper
            begin_lines = generate_begin_block_documentation(db)
            lines.extend(begin_lines)
    else:
        lines.append("No se encontraron DBs en el archivo JSON.")

@@ -184,7 +167,6 @@ def generate_json_documentation(data: Dict[str, Any], output_filename: str):

    except Exception as e:
        print(f"Error al escribir el archivo Markdown de documentación {output_filename}: {e}")


if __name__ == "__main__":
    working_dir = find_working_directory()
    print(f"Using working directory: {working_dir}")

@@ -199,7 +181,6 @@ if __name__ == "__main__":

    if not json_files_to_process:
        print(f"No se encontraron archivos .json en {input_json_dir}")
    else:
        print(f"Archivos JSON encontrados para procesar: {len(json_files_to_process)}")

        for json_input_filepath in json_files_to_process:

@@ -213,12 +194,6 @@ if __name__ == "__main__":

            with open(json_input_filepath, 'r', encoding='utf-8') as f:
                data_from_json = json.load(f)
            print(f"Archivo JSON '{current_json_filename}' cargado correctamente.")
        except Exception as e:
            print(f"Error al leer el archivo JSON {json_input_filepath}: {e}")
            continue

@@ -228,4 +203,4 @@ if __name__ == "__main__":

        except Exception as e:
            print(f"Error al generar la documentación para {current_json_filename}: {e}")

    print("\n--- Proceso de generación de descripciones Markdown completado ---")

@@ -1,11 +1,11 @@

# --- x6_refactored.py ---
import json
from typing import List, Dict, Any
import openpyxl
from openpyxl.utils import get_column_letter
import sys
import os
import glob

script_root = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.dirname(__file__)))

@@ -13,6 +13,10 @@ script_root = os.path.dirname(

sys.path.append(script_root)
from backend.script_utils import load_configuration

# Shared helpers imported from x3 and x4
from x3 import flatten_db_structure, format_address_for_display
from x4 import format_data_type_for_source

def find_working_directory():
    configs = load_configuration()
    working_directory = configs.get("working_directory")

@@ -21,90 +25,90 @@ def find_working_directory():

        sys.exit(1)
    return working_directory
|
||||||
|
|
||||||
# format_data_type_for_source (copied from x4.py as it's needed)
|
|
||||||
def format_data_type_for_source(var_info: Dict[str, Any]) -> str:
|
|
||||||
base_type = var_info.get("udt_source_name") if var_info.get("udt_source_name") else var_info["data_type"]
|
|
||||||
type_str = ""
|
|
||||||
if var_info.get("array_dimensions"):
|
|
||||||
dims_str = ",".join([f"{d['lower_bound']}..{d['upper_bound']}" for d in var_info["array_dimensions"]])
|
|
||||||
type_str += f"ARRAY [{dims_str}] OF "
|
|
||||||
type_str += base_type
|
|
||||||
if var_info["data_type"].upper() == "STRING" and var_info.get("string_length") is not None:
|
|
||||||
type_str += f"[{var_info['string_length']}]"
|
|
||||||
return type_str
|
|
||||||
|
|
||||||
def generate_excel_table(db_info: Dict[str, Any], excel_filename: str):
|
def generate_excel_table(db_info: Dict[str, Any], excel_filename: str):
|
||||||
"""
|
"""
|
||||||
Generates an Excel file with DB documentation.
|
Genera un archivo Excel con documentación del DB usando flatten_db_structure.
|
||||||
"""
|
"""
|
||||||
workbook = openpyxl.Workbook()
|
workbook = openpyxl.Workbook()
|
||||||
sheet = workbook.active
|
sheet = workbook.active
|
||||||
|
|
||||||
db_name_safe = db_info['name'].replace('"', '').replace(' ', '_').replace('/','_')
|
db_name_safe = db_info['name'].replace('"', '').replace(' ', '_').replace('/','_')
|
||||||
sheet.title = f"DB_{db_name_safe}"[:31] # Sheet names have a length limit
|
sheet.title = f"DB_{db_name_safe}"[:31] # Sheet names tienen límite de longitud
|
||||||
|
|
||||||
headers = ["Address", "Name", "Type", "Initial Value", "Actual Value", "Comment"]
|
# Definir encabezados
|
||||||
|
headers = ["Address", "Name", "Type", "Size (Bytes)", "Bit Size", "Initial Value", "Actual Value", "Comment"]
|
||||||
for col_num, header in enumerate(headers, 1):
|
for col_num, header in enumerate(headers, 1):
|
||||||
cell = sheet.cell(row=1, column=col_num, value=header)
|
cell = sheet.cell(row=1, column=col_num, value=header)
|
||||||
cell.font = openpyxl.styles.Font(bold=True)
|
cell.font = openpyxl.styles.Font(bold=True)
|
||||||
|
|
||||||
current_row = 2
|
# Usar flatten_db_structure importado de x3
|
||||||
processed_expanded_members = set() # To handle expanded UDT members correctly
|
flat_vars = flatten_db_structure(db_info)
|
||||||
|
|
||||||
+    # Poblar filas con los datos
+    for row_num, var in enumerate(flat_vars, 2):
+        # Columna 1: Address
+        address = var.get("address_display", format_address_for_display(var["byte_offset"], var.get("bit_size", 0)))
+        sheet.cell(row=row_num, column=1, value=address)
+
+        # Columna 2: Name
+        sheet.cell(row=row_num, column=2, value=var["full_path"])
+
+        # Columna 3: Type
+        data_type = format_data_type_for_source(var)
+        sheet.cell(row=row_num, column=3, value=data_type)
+
+        # Columna 4: Size (Bytes)
+        sheet.cell(row=row_num, column=4, value=var.get("size_in_bytes", 0))
+
+        # Columna 5: Bit Size
+        sheet.cell(row=row_num, column=5, value=var.get("bit_size", 0) if var.get("bit_size", 0) > 0 else None)
+
+        # Columna 6: Initial Value
+        sheet.cell(row=row_num, column=6, value=var.get("initial_value", ""))
+
+        # Columna 7: Actual Value
+        sheet.cell(row=row_num, column=7, value=var.get("current_value", ""))
+
+        # Columna 8: Comment
+        sheet.cell(row=row_num, column=8, value=var.get("comment", ""))
+
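One subtlety in the row loop above: dict.get(key, default) evaluates the default eagerly, so format_address_for_display() runs even when address_display is already present. Harmless here, but avoidable; a lazy variant under the same assumed keys:

# Example (not part of the commit): lazy fallback, same assumed keys and helper.
address = var.get("address_display")
if address is None:
    address = format_address_for_display(var["byte_offset"], var.get("bit_size", 0))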
-    def flatten_members_for_excel(members: List[Dict[str, Any]], prefix: str = "", base_offset: float = 0.0, is_expansion: bool = False):
-        nonlocal current_row
-        for var_idx, var in enumerate(members):
-            member_id = f"{prefix}{var['name']}_{var_idx}" # Unique ID for processed check
-            if is_expansion and member_id in processed_expanded_members:
-                continue
-            if is_expansion:
-                processed_expanded_members.add(member_id)
-
-            name_for_display = f"{prefix}{var['name']}"
-            address = f"{var['byte_offset']:.1f}" if isinstance(var['byte_offset'], float) else str(var['byte_offset'])
-            # Adjust address formatting for bits as in markdown generation
-            if var.get("bit_size", 0) > 0 and isinstance(var['byte_offset'], float) and var['byte_offset'] != int(var['byte_offset']):
-                pass # Already formatted like X.Y
-            elif var.get("bit_size", 0) > 0 :
-                address = f"{int(var['byte_offset'])}.0" # Ensure X.0 for bits at the start of a byte
-
-            data_type_str = format_data_type_for_source(var)
-            initial_value = str(var.get("initial_value", ""))
-            actual_value = str(var.get("current_value", ""))
-            comment = str(var.get("comment", ""))
-
-            is_struct_container = var["data_type"].upper() == "STRUCT" and \
-                not var.get("udt_source_name") and \
-                var.get("children")
-            is_udt_instance_container = bool(var.get("udt_source_name")) and var.get("children")
-
-            if not is_struct_container and not is_udt_instance_container or var.get("is_udt_expanded_member"):
-                row_data = [address, name_for_display, data_type_str, initial_value, actual_value, comment]
-                for col_num, value in enumerate(row_data, 1):
-                    sheet.cell(row=current_row, column=col_num, value=value)
-                current_row += 1
-
-            if var.get("children"):
-                flatten_members_for_excel(var["children"],
-                    f"{name_for_display}.",
-                    var['byte_offset'], # Pass the parent's offset
-                    is_expansion=bool(var.get("udt_source_name"))) # Mark if we are expanding a UDT
-
-    flatten_members_for_excel(db_info.get("members", []))
-
-    # Auto-size columns for better readability
-    for col_idx, column_cells in enumerate(sheet.columns, 1):
-        max_length = 0
-        column = get_column_letter(col_idx)
-        for cell in column_cells:
-            try:
-                if len(str(cell.value)) > max_length:
-                    max_length = len(str(cell.value))
-            except:
-                pass
-        adjusted_width = (max_length + 2)
-        sheet.column_dimensions[column].width = adjusted_width
-
+    # Crear una segunda hoja para el bloque BEGIN
+    begin_sheet = workbook.create_sheet(title="BEGIN_Values")
+    begin_headers = ["Address", "Path", "Value"]
+    for col_num, header in enumerate(begin_headers, 1):
+        cell = begin_sheet.cell(row=1, column=col_num, value=header)
+        cell.font = openpyxl.styles.Font(bold=True)
+
+    # Filtrar solo variables con valores actuales para la hoja BEGIN
+    vars_with_values = [var for var in flat_vars if var.get("current_value") is not None]
+
+    # Poblar la hoja BEGIN
+    for row_num, var in enumerate(vars_with_values, 2):
+        # Columna 1: Address
+        begin_sheet.cell(row=row_num, column=1, value=var.get("address_display"))
+
+        # Columna 2: Path
+        begin_sheet.cell(row=row_num, column=2, value=var["full_path"])
+
+        # Columna 3: Value
+        value_str = str(var["current_value"])
+        if value_str.lower() == "true": value_str = "TRUE"
+        elif value_str.lower() == "false": value_str = "FALSE"
+        begin_sheet.cell(row=row_num, column=3, value=value_str)
+
+    # Auto-ajustar columnas para mejor legibilidad
+    for sheet in workbook.worksheets:
+        for col_idx, column_cells in enumerate(sheet.columns, 1):
+            max_length = 0
+            column = get_column_letter(col_idx)
+            for cell in column_cells:
+                try:
+                    if len(str(cell.value)) > max_length:
+                        max_length = len(str(cell.value))
+                except:
+                    pass
+            adjusted_width = min(max_length + 2, 100) # Limitar a 100 para anchos extremos
+            sheet.column_dimensions[column].width = adjusted_width
+
     try:
         workbook.save(excel_filename)
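The BEGIN_Values sheet uppercases booleans because S7 source (the BEGIN block it mirrors) spells them TRUE/FALSE. The same rule as a standalone, hypothetical helper:

# Example (not part of the commit): hypothetical helper mirroring the TRUE/FALSE rule above.
def normalize_s7_bool(value) -> str:
    s = str(value)
    if s.lower() == "true":
        return "TRUE"
    if s.lower() == "false":
        return "FALSE"
    return s

assert normalize_s7_bool(True) == "TRUE"     # Python bool -> "True" -> "TRUE"
assert normalize_s7_bool("false") == "FALSE"
assert normalize_s7_bool(42) == "42"         # non-booleans pass through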
@@ -137,20 +141,13 @@ def main():
             with open(json_input_filepath, 'r', encoding='utf-8') as f:
                 data_from_json = json.load(f)
             print(f"Archivo JSON '{current_json_filename}' cargado correctamente.")
-        except FileNotFoundError:
-            print(f"Error: El archivo JSON de entrada '{current_json_filename}' no fue encontrado en {json_input_filepath}.")
-            continue
-        except json.JSONDecodeError:
-            print(f"Error: El archivo JSON '{current_json_filename}' no tiene un formato JSON válido.")
-            continue
         except Exception as e:
             print(f"Error al cargar/leer {current_json_filename}: {e}")
             continue

         if data_from_json.get("dbs"):
             for db_to_document in data_from_json["dbs"]:
-                # Construir el path completo para el archivo Excel de salida
-                excel_output_filename = os.path.join(documentation_dir, f"{current_json_filename}.xlsx")
+                excel_output_filename = os.path.join(documentation_dir, f"{current_json_filename}_{db_to_document['name'].replace('"', '')}.xlsx")

                 print(f"Generando documentación Excel para DB: '{db_to_document['name']}' (desde {current_json_filename}) -> {excel_output_filename}")
                 try:
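The hunk above folds the FileNotFoundError and JSONDecodeError branches into the generic handler, which keeps the loop alive but flattens the diagnostics. A sketch that preserves the specific messages, written as a helper so it stands alone:

# Example (not part of the commit): specific-first error handling as a helper.
import json

def load_json_or_none(path: str):
    try:
        with open(path, 'r', encoding='utf-8') as f:
            return json.load(f)
    except FileNotFoundError:
        print(f"Error: file not found: {path}")
    except json.JSONDecodeError as e:
        print(f"Error: invalid JSON in {path}: {e}")
    except Exception as e:  # last resort, mirrors the committed code
        print(f"Error reading {path}: {e}")
    return None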
@@ -1,37 +1,39 @@
-# --- x7.py ---
+# --- x7_refactored.py ---
 import json
 import os
 import glob
 import sys
 import copy
+import shutil # Para copiar archivos
 from typing import Dict, List, Tuple, Any, Optional

-# Importar load_configuration desde backend.script_utils
+# Importar para el path
 script_root = os.path.dirname(
     os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
 )
 sys.path.append(script_root)
 from backend.script_utils import load_configuration

-# Importar lo necesario desde x3.py
-sys.path.append(os.path.dirname(__file__))
-from x3 import S7Parser, find_working_directory, custom_json_serializer, ParsedData
+# Importar desde x3
+from x3 import S7Parser, find_working_directory, custom_json_serializer, flatten_db_structure, format_address_for_display
+from x4 import format_data_type_for_source

+# Importar desde x4 para generar archivos
+from x4 import generate_s7_source_code_lines, generate_markdown_table

 def find_matching_files(working_dir: str) -> List[Tuple[str, str]]:
     """
     Busca pares de archivos _data y _format con extensión .db o .awl.
     """
-    # Buscar archivos _data
+    # [Código existente]
     data_files_db = glob.glob(os.path.join(working_dir, "*_data.db"))
     data_files_awl = glob.glob(os.path.join(working_dir, "*_data.awl"))
     all_data_files = data_files_db + data_files_awl

-    # Buscar archivos _format
     format_files_db = glob.glob(os.path.join(working_dir, "*_format.db"))
     format_files_awl = glob.glob(os.path.join(working_dir, "*_format.awl"))
     all_format_files = format_files_db + format_files_awl

-    # Emparejar archivos _data y _format
     matched_pairs = []
     for data_file in all_data_files:
         base_name = os.path.basename(data_file).replace("_data", "").split('.')[0]
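For context, pairing works by reducing both file names to a common base: strip the _data/_format suffix and the extension. The rule in isolation, with illustrative paths:

# Example (not part of the commit): the base-name rule used for pairing.
import os

def base_key(path: str, suffix: str) -> str:
    return os.path.basename(path).replace(suffix, "").split('.')[0]

assert base_key("C:/work/db1001_data.db", "_data") == "db1001"
assert base_key("C:/work/db1001_format.awl", "_format") == "db1001"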
@@ -41,11 +43,12 @@ def find_matching_files(working_dir: str) -> List[Tuple[str, str]]:

     return matched_pairs

+# [Otras funciones existentes: parse_files_to_json, compare_structures_by_offset, update_values_recursive, create_updated_json]
+
 def parse_files_to_json(data_file: str, format_file: str, json_dir: str) -> Tuple[Dict, Dict]:
     """
     Parsea los archivos _data y _format usando S7Parser y guarda los resultados como JSON.
     """
-    # Instancias separadas del parser para cada archivo
     data_parser = S7Parser()
     format_parser = S7Parser()

@@ -55,14 +58,12 @@ def parse_files_to_json(data_file: str, format_file: str, json_dir: str) -> Tuple[Dict, Dict]:
     print(f"Parseando archivo format: {os.path.basename(format_file)}")
     format_result = format_parser.parse_file(format_file)

-    # Guardar resultados como JSON
     data_base = os.path.splitext(os.path.basename(data_file))[0]
     format_base = os.path.splitext(os.path.basename(format_file))[0]

     data_json_path = os.path.join(json_dir, f"{data_base}.json")
     format_json_path = os.path.join(json_dir, f"{format_base}.json")

-    # Serializar y guardar como JSON
     data_json = json.dumps(data_result, default=custom_json_serializer, indent=2)
     format_json = json.dumps(format_result, default=custom_json_serializer, indent=2)

@@ -74,273 +75,514 @@ def parse_files_to_json(data_file: str, format_file: str, json_dir: str) -> Tuple[Dict, Dict]:

     print(f"Archivos JSON generados: {os.path.basename(data_json_path)} y {os.path.basename(format_json_path)}")

-    # Cargar de nuevo como objetos para procesamiento
     data_obj = json.loads(data_json)
     format_obj = json.loads(format_json)

     return data_obj, format_obj

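Note the dumps-then-loads round trip above: it is a cheap way to coerce parser objects (dataclasses and the like) into plain dicts via the custom serializer. The trick in miniature, with a stand-in for custom_json_serializer:

# Example (not part of the commit): dumps/loads round-trip to get plain dicts.
import json
from dataclasses import dataclass, is_dataclass, asdict

@dataclass
class Var:
    name: str
    byte_offset: float

def to_jsonable(obj):  # stand-in for custom_json_serializer
    if is_dataclass(obj):
        return asdict(obj)
    raise TypeError(f"not serializable: {type(obj)!r}")

obj = {"members": [Var("Spare101", 10.1)]}
plain = json.loads(json.dumps(obj, default=to_jsonable))
assert plain == {"members": [{"name": "Spare101", "byte_offset": 10.1}]}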
-def create_offset_path_map(members: List[Dict], path_prefix: str = "") -> Dict[float, str]:
-    """
-    Crea un mapa que asocia cada offset con la ruta completa de la variable.
-    Esto será usado para actualizar las asignaciones del bloque BEGIN.
-    """
-    offset_to_path = {}
-
-    def process_member(member: Dict, current_path_prefix: str):
-        offset = member["byte_offset"]
-        full_path = f"{current_path_prefix}{member['name']}"
-
-        # Mapear offset a ruta completa
-        offset_to_path[offset] = full_path
-
-        # Procesar hijos recursivamente
-        if "children" in member and member["children"]:
-            for child in member["children"]:
-                process_member(child, f"{full_path}.")
-
-    # Procesar todos los miembros
-    for member in members:
-        process_member(member, path_prefix)
-
-    return offset_to_path
-
-def flatten_variables_by_offset(data: Dict) -> Dict[float, Dict]:
-    """
-    Aplana completamente todas las variables por offset, similar a flatten_members_for_markdown.
-    Incluye UDTs expandidos, estructuras anidadas, etc.
-    """
-    offset_map = {}
-    processed_expanded_members = set()
-
-    def process_members(members: List[Dict], prefix: str = "", is_expansion: bool = False):
-        for var_idx, var in enumerate(members):
-            # Control para miembros UDT expandidos (evitar duplicados)
-            member_id = f"{prefix}{var['name']}_{var_idx}"
-            if is_expansion and member_id in processed_expanded_members:
-                continue
-            if is_expansion:
-                processed_expanded_members.add(member_id)
-
-            # Extraer offset e información de la variable
-            offset = var["byte_offset"]
-            var_info = {
-                "path": f"{prefix}{var['name']}",
-                "data_type": var["data_type"],
-                "size_in_bytes": var["size_in_bytes"],
-                "bit_size": var.get("bit_size", 0),
-                "initial_value": var.get("initial_value"),
-                "current_value": var.get("current_value"),
-                "current_element_values": var.get("current_element_values")
-            }
-
-            # Guardar en mapa por offset
-            offset_map[offset] = var_info
-
-            # Procesar recursivamente los hijos
-            if "children" in var and var["children"]:
-                process_members(
-                    var["children"],
-                    f"{prefix}{var['name']}.",
-                    is_expansion=bool(var.get("udt_source_name"))
-                )
-
-    # Procesar todos los DBs
-    for db in data.get("dbs", []):
-        process_members(db.get("members", []))
-
-    return offset_map
-
-def create_path_to_offset_map(members: List[Dict], path_prefix: str = "") -> Dict[str, float]:
-    """
-    Crea un mapa que asocia cada ruta (path) completa con su offset.
-    Esto será usado para actualizar las asignaciones del bloque BEGIN.
-    """
-    path_to_offset = {}
-    processed_expanded_members = set()
-
-    def process_member(member: Dict, current_path_prefix: str, is_expansion: bool = False):
-        member_id = f"{current_path_prefix}{member['name']}"
-
-        # Evitar duplicados para miembros expandidos de UDTs
-        if is_expansion and member_id in processed_expanded_members:
-            return
-        if is_expansion:
-            processed_expanded_members.add(member_id)
-
-        offset = member["byte_offset"]
-        path = f"{current_path_prefix}{member['name']}"
-
-        # Mapear ruta a offset
-        path_to_offset[path] = offset
-
-        # Para arrays, también mapear rutas con índices si hay valores iniciales
-        if member.get("array_dimensions") and member.get("current_element_values"):
-            for index in member["current_element_values"].keys():
-                array_path = f"{path}[{index}]"
-                path_to_offset[array_path] = offset
-
-        # Procesar hijos recursivamente
-        if "children" in member and member["children"]:
-            for child in member["children"]:
-                process_member(
-                    child,
-                    f"{path}.",
-                    is_expansion=bool(member.get("udt_source_name"))
-                )
-
-    # Procesar todos los miembros
-    for member in members:
-        process_member(member, path_prefix)
-
-    return path_to_offset
-
-def compare_structures_by_offset(data_vars: Dict[float, Dict], format_vars: Dict[float, Dict]) -> Tuple[bool, List[str]]:
+def compare_structures_by_offset(data_vars: List[Dict], format_vars: List[Dict]) -> Tuple[bool, List[str]]:
"""
|
"""
|
||||||
Compara variables por offset, verificando compatibilidad.
|
Compara variables por offset, verificando compatibilidad.
|
||||||
|
Usa las listas aplanadas de flatten_db_structure.
|
||||||
"""
|
"""
|
||||||
issues = []
|
issues = []
|
||||||
|
|
||||||
|
# Crear diccionarios para búsqueda rápida por offset
|
||||||
|
data_by_offset = {var["byte_offset"]: var for var in data_vars}
|
||||||
|
format_by_offset = {var["byte_offset"]: var for var in format_vars}
|
||||||
|
|
||||||
# Recopilar todos los offsets únicos de ambos conjuntos
|
# Recopilar todos los offsets únicos de ambos conjuntos
|
||||||
all_offsets = sorted(set(list(data_vars.keys()) + list(format_vars.keys())))
|
all_offsets = sorted(set(list(data_by_offset.keys()) + list(format_by_offset.keys())))
|
||||||
|
|
||||||
# Verificar que todos los offsets existan en ambos conjuntos
|
# Verificar que todos los offsets existan en ambos conjuntos
|
||||||
for offset in all_offsets:
|
for offset in all_offsets:
|
||||||
if offset not in data_vars:
|
if offset not in data_by_offset:
|
||||||
issues.append(f"Offset {offset} existe en _format pero no en _data")
|
issues.append(f"Offset {offset} existe en _format pero no en _data")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
if offset not in format_vars:
|
if offset not in format_by_offset:
|
||||||
issues.append(f"Offset {offset} existe en _data pero no en _format")
|
issues.append(f"Offset {offset} existe en _data pero no en _format")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
|
# Obtener las variables para comparar
|
||||||
|
data_var = data_by_offset[offset]
|
||||||
|
format_var = format_by_offset[offset]
|
||||||
|
|
||||||
# Verificar coincidencia de tipos
|
# Verificar coincidencia de tipos
|
||||||
data_type = data_vars[offset]["data_type"].upper()
|
data_type = data_var["data_type"].upper()
|
||||||
format_type = format_vars[offset]["data_type"].upper()
|
format_type = format_var["data_type"].upper()
|
||||||
|
|
||||||
if data_type != format_type:
|
if data_type != format_type:
|
||||||
issues.append(f"Tipo de dato diferente en offset {offset}: {data_type} ({data_vars[offset]['path']}) vs {format_type} ({format_vars[offset]['path']})")
|
issues.append(f"Tipo de dato diferente en offset {offset}: {data_type} ({data_var['full_path']}) vs {format_type} ({format_var['full_path']})")
|
||||||
|
|
||||||
# Verificar tamaño
|
# Verificar tamaño
|
||||||
data_size = data_vars[offset]["size_in_bytes"]
|
data_size = data_var["size_in_bytes"]
|
||||||
format_size = format_vars[offset]["size_in_bytes"]
|
format_size = format_var["size_in_bytes"]
|
||||||
|
|
||||||
if data_size != format_size:
|
if data_size != format_size:
|
||||||
issues.append(f"Tamaño diferente en offset {offset}: {data_size} bytes ({data_vars[offset]['path']}) vs {format_size} bytes ({format_vars[offset]['path']})")
|
issues.append(f"Tamaño diferente en offset {offset}: {data_size} bytes ({data_var['full_path']}) vs {format_size} bytes ({format_var['full_path']})")
|
||||||
|
|
||||||
# Verificar tamaño en bits para BOOLs
|
# Verificar tamaño en bits para BOOLs
|
||||||
data_bit_size = data_vars[offset]["bit_size"]
|
data_bit_size = data_var.get("bit_size", 0)
|
||||||
format_bit_size = format_vars[offset]["bit_size"]
|
format_bit_size = format_var.get("bit_size", 0)
|
||||||
|
|
||||||
if data_bit_size != format_bit_size:
|
if data_bit_size != format_bit_size:
|
||||||
issues.append(f"Tamaño en bits diferente en offset {offset}: {data_bit_size} ({data_vars[offset]['path']}) vs {format_bit_size} ({format_vars[offset]['path']})")
|
issues.append(f"Tamaño en bits diferente en offset {offset}: {data_bit_size} ({data_var['full_path']}) vs {format_bit_size} ({format_var['full_path']})")
|
||||||
|
|
||||||
return len(issues) == 0, issues
|
return len(issues) == 0, issues
|
||||||
|
|
||||||
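A toy invocation of the rewritten comparison, assuming this module is importable and using the assumed flattened-entry shape (full_path, byte_offset, ...); a single size mismatch at offset 0.0 should yield exactly one issue:

# Example (not part of the commit): exercising compare_structures_by_offset.
fmt = [{"full_path": "DB.A", "byte_offset": 0.0, "data_type": "INT",
        "size_in_bytes": 2, "bit_size": 0}]
dat = [{"full_path": "DB.X", "byte_offset": 0.0, "data_type": "INT",
        "size_in_bytes": 4, "bit_size": 0}]
ok, issues = compare_structures_by_offset(dat, fmt)
assert not ok and len(issues) == 1  # the size mismatch, reported with both paths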
-def update_values_recursive(target_member: Dict, data_offset_map: Dict[float, Dict]):
-    """
-    Actualiza los valores de target_member con valores de data_offset_map basado en offset.
-    """
-    offset = target_member["byte_offset"]
-
-    # Si encontramos una variable con el mismo offset en _data, tomar sus valores
-    if offset in data_offset_map:
-        data_var = data_offset_map[offset]
-
-        # Actualizar initial_value
-        if "initial_value" in data_var and data_var["initial_value"] is not None:
-            target_member["initial_value"] = data_var["initial_value"]
-
-        # Actualizar current_value
-        if "current_value" in data_var and data_var["current_value"] is not None:
-            target_member["current_value"] = data_var["current_value"]
-
-        # Actualizar current_element_values (para arrays)
-        if "current_element_values" in data_var and data_var["current_element_values"]:
-            target_member["current_element_values"] = data_var["current_element_values"]
-
-    # Actualizar recursivamente los hijos
-    if "children" in target_member and target_member["children"]:
-        for child in target_member["children"]:
-            update_values_recursive(child, data_offset_map)
-
 def create_updated_json(data_json: Dict, format_json: Dict) -> Dict:
     """
     Crea JSON actualizado basado en la estructura de _format con valores de _data.
+    Utiliza offset como clave principal para encontrar variables correspondientes.
+    Reporta errores si no se encuentra un offset correspondiente.
     """
     # Copia profunda de format_json para no modificar el original
     updated_json = copy.deepcopy(format_json)

-    # Extraer todas las variables flat por offset
-    data_offset_map = flatten_variables_by_offset(data_json)
-
-    # Crear mapas de offsets y rutas para cada BD
-    db_maps = {}
-    for db_idx, db in enumerate(format_json.get("dbs", [])):
-        db_name = db["name"]
-        db_maps[db_name] = {
-            "offset_to_path": create_offset_path_map(db.get("members", [])),
-            "path_to_offset": create_path_to_offset_map(db.get("members", []))
-        }
-
-    # Actualizar valores de variables en la estructura de formato
-    for db_idx, db in enumerate(updated_json.get("dbs", [])):
-        for member in db.get("members", []):
-            update_values_recursive(member, data_offset_map)
-
-        # Actualizar también asignaciones del bloque BEGIN
-        db_name = db["name"]
-        data_db = next((d for d in data_json.get("dbs", []) if d["name"] == db_name), None)
-
-        if data_db and "_begin_block_assignments_ordered" in data_db:
-            # Obtener los mapas para este DB
-            offset_to_path = db_maps[db_name]["offset_to_path"]
-
-            # Crear una nueva lista de asignaciones con las rutas correctas
-            updated_assignments = []
-
-            # Para cada asignación en los datos de origen
-            for path, value in data_db["_begin_block_assignments_ordered"]:
-                # Búsqueda por offset si es posible
-                data_db_path_to_offset = create_path_to_offset_map(data_db.get("members", []))
-
-                if path in data_db_path_to_offset:
-                    # Obtener el offset de la ruta original
-                    offset = data_db_path_to_offset[path]
-
-                    # Buscar la ruta correspondiente en el formato usando el offset
-                    if offset in offset_to_path:
-                        new_path = offset_to_path[offset]
-                        updated_assignments.append([new_path, value])
-                        print(f"Mapeando {path} -> {new_path} (offset {offset})")
-                    else:
-                        print(f"Advertencia: No se encontró un mapeo para el offset {offset} ({path})")
-                else:
-                    print(f"Advertencia: No se pudo determinar el offset para la ruta {path}")
-
-            # Actualizar asignaciones en el JSON actualizado
-            db["_begin_block_assignments_ordered"] = updated_assignments
-
-            # También actualizar el diccionario _initial_values_from_begin_block
-            if "_initial_values_from_begin_block" in data_db:
-                updated_values = {}
-                for path, value in updated_assignments:
-                    updated_values[path] = value
-                db["_initial_values_from_begin_block"] = updated_values
+    # Procesar cada DB
+    for db_idx, format_db in enumerate(format_json.get("dbs", [])):
+        # Buscar el DB correspondiente en data_json
+        data_db = next((db for db in data_json.get("dbs", []) if db["name"] == format_db["name"]), None)
+        if not data_db:
+            print(f"Error: No se encontró DB '{format_db['name']}' en data_json")
+            continue # No hay DB correspondiente en data_json
+
+        # Aplanar variables de ambos DBs
+        flat_data_vars = flatten_db_structure(data_db)
+        flat_format_vars = flatten_db_structure(format_db)
+
+        # Crear mapa de offset a variable para data
+        data_by_offset = {var["byte_offset"]: var for var in flat_data_vars}
+
+        # Para cada variable en format, buscar su correspondiente en data por offset
+        for format_var in flat_format_vars:
+            offset = format_var["byte_offset"]
+            path = format_var["full_path"]
+
+            # Buscar la variable correspondiente en data_json por offset
+            if offset in data_by_offset:
+                data_var = data_by_offset[offset]
+
+                # Encontrar la variable original en la estructura jerárquica
+                path_parts = format_var["full_path"].split('.')
+                current_node = updated_json["dbs"][db_idx]
+
+                # Variable para rastrear si se encontró la ruta
+                path_found = True
+
+                # Navegar la jerarquía hasta encontrar el nodo padre
+                for i in range(len(path_parts) - 1):
+                    if "members" in current_node:
+                        # Buscar el miembro correspondiente
+                        member_name = path_parts[i]
+                        matching_members = [m for m in current_node["members"] if m["name"] == member_name]
+                        if matching_members:
+                            current_node = matching_members[0]
+                        else:
+                            print(f"Error: No se encontró el miembro '{member_name}' en la ruta '{path}'")
+                            path_found = False
+                            break # No se encontró la ruta
+                    elif "children" in current_node:
+                        # Buscar el hijo correspondiente
+                        child_name = path_parts[i]
+                        matching_children = [c for c in current_node["children"] if c["name"] == child_name]
+                        if matching_children:
+                            current_node = matching_children[0]
+                        else:
+                            print(f"Error: No se encontró el hijo '{child_name}' en la ruta '{path}'")
+                            path_found = False
+                            break # No se encontró la ruta
+                    else:
+                        print(f"Error: No se puede navegar más en la ruta '{path}', nodo actual no tiene members ni children")
+                        path_found = False
+                        break # No se puede navegar más
+
+                # Si encontramos el nodo padre, actualizar el hijo
+                if path_found and ("members" in current_node or "children" in current_node):
+                    target_list = current_node.get("members", current_node.get("children", []))
+                    target_name = path_parts[-1]
+
+                    # Si es un elemento de array, extraer el nombre base y el índice
+                    if '[' in target_name and ']' in target_name:
+                        base_name = target_name.split('[')[0]
+                        index_str = target_name[target_name.find('[')+1:target_name.find(']')]
+
+                        # Buscar el array base
+                        array_var = next((var for var in target_list if var["name"] == base_name), None)
+                        if array_var:
+                            # Asegurarse que existe current_element_values
+                            if "current_element_values" not in array_var:
+                                array_var["current_element_values"] = {}
+
+                            # Copiar el valor del elemento del array
+                            if "current_value" in data_var:
+                                array_var["current_element_values"][index_str] = {
+                                    "value": data_var["current_value"],
+                                    "offset": data_var["byte_offset"]
+                                }
+                    else:
+                        # Buscar la variable a actualizar
+                        target_var_found = False
+                        for target_var in target_list:
+                            if target_var["name"] == target_name:
+                                target_var_found = True
+
+                                # Limpiar y copiar initial_value si existe
+                                if "initial_value" in target_var:
+                                    del target_var["initial_value"]
+                                if "initial_value" in data_var and data_var["initial_value"] is not None:
+                                    target_var["initial_value"] = data_var["initial_value"]
+
+                                # Limpiar y copiar current_value si existe
+                                if "current_value" in target_var:
+                                    del target_var["current_value"]
+                                if "current_value" in data_var and data_var["current_value"] is not None:
+                                    target_var["current_value"] = data_var["current_value"]
+
+                                # Limpiar y copiar current_element_values si existe
+                                if "current_element_values" in target_var:
+                                    del target_var["current_element_values"]
+                                if "current_element_values" in data_var and data_var["current_element_values"]:
+                                    target_var["current_element_values"] = copy.deepcopy(data_var["current_element_values"])
+
+                                break
+
+                        if not target_var_found and not ('[' in target_name and ']' in target_name):
+                            print(f"Error: No se encontró la variable '{target_name}' en la ruta '{path}'")
+            else:
+                # El offset no existe en data_json, reportar error
+                print(f"Error: Offset {offset} (para '{path}') no encontrado en los datos source (_data)")
+
+                # Eliminar valores si es una variable que no es elemento de array
+                if '[' not in path or ']' not in path:
+                    # Encontrar la variable original en la estructura jerárquica
+                    path_parts = path.split('.')
+                    current_node = updated_json["dbs"][db_idx]
+
+                    # Navegar hasta el nodo padre para limpiar valores
+                    path_found = True
+                    for i in range(len(path_parts) - 1):
+                        if "members" in current_node:
+                            member_name = path_parts[i]
+                            matching_members = [m for m in current_node["members"] if m["name"] == member_name]
+                            if matching_members:
+                                current_node = matching_members[0]
+                            else:
+                                path_found = False
+                                break
+                        elif "children" in current_node:
+                            child_name = path_parts[i]
+                            matching_children = [c for c in current_node["children"] if c["name"] == child_name]
+                            if matching_children:
+                                current_node = matching_children[0]
+                            else:
+                                path_found = False
+                                break
+                        else:
+                            path_found = False
+                            break
+
+                    if path_found and ("members" in current_node or "children" in current_node):
+                        target_list = current_node.get("members", current_node.get("children", []))
+                        target_name = path_parts[-1]
+
+                        for target_var in target_list:
+                            if target_var["name"] == target_name:
+                                # Eliminar valores iniciales y actuales
+                                if "initial_value" in target_var:
+                                    del target_var["initial_value"]
+                                if "current_value" in target_var:
+                                    del target_var["current_value"]
+                                if "current_element_values" in target_var:
+                                    del target_var["current_element_values"]
+                                break

     return updated_json

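The navigation loops above are all instances of "walk a members/children tree by dotted path". The same idea as a compact standalone sketch (hypothetical mini-tree; real nodes also carry offsets and types):

# Example (not part of the commit): walking a members/children tree by dotted path.
def find_parent(node: dict, dotted: str):
    parts = dotted.split('.')
    for name in parts[:-1]:
        kids = node.get("members") or node.get("children") or []
        node = next((k for k in kids if k["name"] == name), None)
        if node is None:
            return None, parts[-1]  # path broken at this segment
    return node, parts[-1]

db = {"members": [{"name": "Options", "children": [{"name": "_ModelNum"}]}]}
parent, leaf = find_parent(db, "Options._ModelNum")
assert parent["name"] == "Options" and leaf == "_ModelNum"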
+def process_updated_json(updated_json: Dict, updated_json_path: str, working_dir: str, documentation_dir: str, original_format_file: str):
+    """
+    Genera los archivos markdown y S7 a partir del JSON actualizado, y copia el archivo S7
+    al directorio de trabajo con la extensión correcta.
+    """
+    # Obtener nombre base y extensión original
+    format_file_name = os.path.basename(original_format_file)
+    base_name = format_file_name.replace("_format", "_updated").split('.')[0]
+    original_extension = os.path.splitext(format_file_name)[1] # .db o .awl
+
+    # Generar archivo markdown para documentación
+    for db in updated_json.get("dbs", []):
+        md_output_filename = os.path.join(documentation_dir, f"{base_name}.md")
+        try:
+            md_lines = []
+            md_lines.append(f"# Documentación S7 para {base_name}")
+            md_lines.append(f"_Fuente JSON: {os.path.basename(updated_json_path)}_")
+            md_lines.append("")
+
+            # Generar tabla markdown usando generate_markdown_table importado de x4
+            db_md_lines = generate_markdown_table(db)
+            md_lines.extend(db_md_lines)
+
+            with open(md_output_filename, 'w', encoding='utf-8') as f:
+                for line in md_lines:
+                    f.write(line + "\n")
+            print(f"Archivo Markdown generado: {md_output_filename}")
+        except Exception as e:
+            print(f"Error al generar Markdown para {base_name}: {e}")
+
+    # Generar archivo de código fuente S7
+    s7_txt_filename = os.path.join(documentation_dir, f"{base_name}.txt")
+    try:
+        s7_lines = generate_s7_source_code_lines(updated_json)
+        with open(s7_txt_filename, 'w', encoding='utf-8') as f:
+            for line in s7_lines:
+                f.write(line + "\n")
+        print(f"Archivo S7 generado: {s7_txt_filename}")
+
+        # Copiar al directorio de trabajo con la extensión original
+        s7_output_filename = os.path.join(working_dir, f"{base_name}{original_extension}")
+        shutil.copy2(s7_txt_filename, s7_output_filename)
+        print(f"Archivo S7 copiado a: {s7_output_filename}")
+    except Exception as e:
+        print(f"Error al generar archivo S7 para {base_name}: {e}")
+
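process_updated_json copies with shutil.copy2 rather than shutil.copy, so the .db/.awl dropped into the working directory keeps the timestamps of the generated .txt. Minimal illustration:

# Example (not part of the commit): copy2 preserves metadata (mtime), copy does not.
import os, shutil, tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "db1001_updated.txt")
    dst = os.path.join(tmp, "db1001_updated.db")
    with open(src, "w") as f:
        f.write("DATA_BLOCK ...\n")
    shutil.copy2(src, dst)  # content plus metadata
    assert abs(os.path.getmtime(src) - os.path.getmtime(dst)) < 1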
+def generate_comparison_excel(format_json: Dict, data_json: Dict, updated_json: Dict, excel_filename: str):
+    """
+    Genera un archivo Excel con dos hojas que comparan los valores iniciales y actuales
+    entre los archivos format_json, data_json y updated_json.
+    Filtra STRUCTs y solo compara variables con valores reales.
+
+    Args:
+        format_json: JSON con la estructura y nombres de formato
+        data_json: JSON con los datos source
+        updated_json: JSON con los datos actualizados
+        excel_filename: Ruta del archivo Excel a generar
+    """
+    import openpyxl
+    from openpyxl.utils import get_column_letter
+    from openpyxl.styles import PatternFill, Font
+
+    # Crear un nuevo libro de Excel
+    workbook = openpyxl.Workbook()
+
+    # Definir estilos para resaltar diferencias
+    diff_fill = PatternFill(start_color="FFFF00", end_color="FFFF00", fill_type="solid") # Amarillo
+    header_font = Font(bold=True)
+
+    # Procesar cada DB
+    for db_idx, format_db in enumerate(format_json.get("dbs", [])):
+        # Buscar los DBs correspondientes
+        db_name = format_db["name"]
+        data_db = next((db for db in data_json.get("dbs", []) if db["name"] == db_name), None)
+        updated_db = next((db for db in updated_json.get("dbs", []) if db["name"] == db_name), None)
+
+        if not data_db or not updated_db:
+            print(f"Error: No se encontró el DB '{db_name}' en alguno de los archivos JSON")
+            continue
+
+        # Crear hojas para valores iniciales y actuales para este DB
+        initial_sheet = workbook.active if db_idx == 0 else workbook.create_sheet()
+        initial_sheet.title = f"{db_name}_Initial"[:31] # Limitar longitud del nombre de hoja
+
+        current_sheet = workbook.create_sheet()
+        current_sheet.title = f"{db_name}_Current"[:31]
+
+        # Aplanar variables de los tres DBs
+        flat_format_vars = flatten_db_structure(format_db)
+        flat_data_vars = flatten_db_structure(data_db)
+        flat_updated_vars = flatten_db_structure(updated_db)
+
+        # Filtrar STRUCTs - solo trabajamos con variables que tienen valores reales
+        flat_format_vars = [var for var in flat_format_vars
+                            if var["data_type"].upper() != "STRUCT" and not var.get("children")]
+
+        # Crear mapas de offset a variable para búsqueda rápida
+        data_by_offset = {var["byte_offset"]: var for var in flat_data_vars
+                          if var["data_type"].upper() != "STRUCT" and not var.get("children")}
+        updated_by_offset = {var["byte_offset"]: var for var in flat_updated_vars
+                             if var["data_type"].upper() != "STRUCT" and not var.get("children")}
+
+        # Configurar encabezados para la hoja de valores iniciales
+        headers_initial = ["Address", "Name", "Type", "Format Initial", "Data Initial", "Updated Initial", "Difference"]
+        for col_num, header in enumerate(headers_initial, 1):
+            cell = initial_sheet.cell(row=1, column=col_num, value=header)
+            cell.font = header_font
+
+        # Configurar encabezados para la hoja de valores actuales
+        headers_current = ["Address", "Name", "Type", "Format Current", "Data Current", "Updated Current", "Difference"]
+        for col_num, header in enumerate(headers_current, 1):
+            cell = current_sheet.cell(row=1, column=col_num, value=header)
+            cell.font = header_font
+
+        # Llenar las hojas con datos
+        initial_row = 2
+        current_row = 2
+
+        for format_var in flat_format_vars:
+            offset = format_var["byte_offset"]
+            path = format_var["full_path"]
+            data_type = format_data_type_for_source(format_var)
+            address = format_var.get("address_display", format_address_for_display(offset, format_var.get("bit_size", 0)))
+
+            # Obtener variables correspondientes por offset
+            data_var = data_by_offset.get(offset)
+            updated_var = updated_by_offset.get(offset)
+
+            # Procesar valores iniciales (solo si la variable puede tener initial_value)
+            format_initial = format_var.get("initial_value", "")
+            data_initial = data_var.get("initial_value", "") if data_var else ""
+            updated_initial = updated_var.get("initial_value", "") if updated_var else ""
+
+            # Solo incluir en la hoja de valores iniciales si al menos uno tiene valor inicial
+            if format_initial or data_initial or updated_initial:
+                # Determinar si hay diferencias en valores iniciales
+                has_initial_diff = (format_initial != data_initial or
+                                    format_initial != updated_initial or
+                                    data_initial != updated_initial)
+
+                # Escribir datos de valores iniciales
+                initial_sheet.cell(row=initial_row, column=1, value=address)
+                initial_sheet.cell(row=initial_row, column=2, value=path)
+                initial_sheet.cell(row=initial_row, column=3, value=data_type)
+                initial_sheet.cell(row=initial_row, column=4, value=str(format_initial))
+                initial_sheet.cell(row=initial_row, column=5, value=str(data_initial))
+                initial_sheet.cell(row=initial_row, column=6, value=str(updated_initial))
+
+                # Resaltar diferencias en valores iniciales
+                if has_initial_diff:
+                    initial_sheet.cell(row=initial_row, column=7, value="Sí")
+                    for col in range(4, 7):
+                        initial_sheet.cell(row=initial_row, column=col).fill = diff_fill
+                else:
+                    initial_sheet.cell(row=initial_row, column=7, value="No")
+
+                initial_row += 1
+
+            # Procesar valores actuales
+            format_current = format_var.get("current_value", "")
+            data_current = data_var.get("current_value", "") if data_var else ""
+            updated_current = updated_var.get("current_value", "") if updated_var else ""
+
+            # Solo incluir en la hoja de valores actuales si al menos uno tiene valor actual
+            if format_current or data_current or updated_current:
+                # Determinar si hay diferencias en valores actuales
+                has_current_diff = (format_current != data_current or
+                                    format_current != updated_current or
+                                    data_current != updated_current)
+
+                # Escribir datos de valores actuales
+                current_sheet.cell(row=current_row, column=1, value=address)
+                current_sheet.cell(row=current_row, column=2, value=path)
+                current_sheet.cell(row=current_row, column=3, value=data_type)
+                current_sheet.cell(row=current_row, column=4, value=str(format_current))
+                current_sheet.cell(row=current_row, column=5, value=str(data_current))
+                current_sheet.cell(row=current_row, column=6, value=str(updated_current))
+
+                # Resaltar diferencias en valores actuales
+                if has_current_diff:
+                    current_sheet.cell(row=current_row, column=7, value="Sí")
+                    for col in range(4, 7):
+                        current_sheet.cell(row=current_row, column=col).fill = diff_fill
+                else:
+                    current_sheet.cell(row=current_row, column=7, value="No")
+
+                current_row += 1
+
+            # Si es un array, procesamos también sus elementos
+            if format_var.get("current_element_values") or (data_var and data_var.get("current_element_values")) or (updated_var and updated_var.get("current_element_values")):
+                format_elements = format_var.get("current_element_values", {})
+                data_elements = data_var.get("current_element_values", {}) if data_var else {}
+                updated_elements = updated_var.get("current_element_values", {}) if updated_var else {}
+
+                # Unir todos los índices disponibles
+                all_indices = set(list(format_elements.keys()) +
+                                  list(data_elements.keys()) +
+                                  list(updated_elements.keys()))
+
+                # Ordenar índices numéricamente
+                sorted_indices = sorted(all_indices, key=lambda x: [int(i) for i in x.split(',')]) if all_indices else []
+
+                for idx in sorted_indices:
+                    elem_path = f"{path}[{idx}]"
+
+                    # Valores actuales para elementos de array
+                    format_elem_val = ""
+                    if idx in format_elements:
+                        if isinstance(format_elements[idx], dict) and "value" in format_elements[idx]:
+                            format_elem_val = format_elements[idx]["value"]
+                        else:
+                            format_elem_val = format_elements[idx]
+
+                    data_elem_val = ""
+                    if idx in data_elements:
+                        if isinstance(data_elements[idx], dict) and "value" in data_elements[idx]:
+                            data_elem_val = data_elements[idx]["value"]
+                        else:
+                            data_elem_val = data_elements[idx]
+
+                    updated_elem_val = ""
+                    if idx in updated_elements:
+                        if isinstance(updated_elements[idx], dict) and "value" in updated_elements[idx]:
+                            updated_elem_val = updated_elements[idx]["value"]
+                        else:
+                            updated_elem_val = updated_elements[idx]
+
+                    # Determinar si hay diferencias
+                    has_elem_diff = (str(format_elem_val) != str(data_elem_val) or
+                                     str(format_elem_val) != str(updated_elem_val) or
+                                     str(data_elem_val) != str(updated_elem_val))
+
+                    # Escribir datos de elementos de array (solo en hoja de valores actuales)
+                    current_sheet.cell(row=current_row, column=1, value=address)
+                    current_sheet.cell(row=current_row, column=2, value=elem_path)
+                    current_sheet.cell(row=current_row, column=3, value=data_type.replace("ARRAY", "").strip())
+                    current_sheet.cell(row=current_row, column=4, value=str(format_elem_val))
+                    current_sheet.cell(row=current_row, column=5, value=str(data_elem_val))
+                    current_sheet.cell(row=current_row, column=6, value=str(updated_elem_val))
+
+                    # Resaltar diferencias
+                    if has_elem_diff:
+                        current_sheet.cell(row=current_row, column=7, value="Sí")
+                        for col in range(4, 7):
+                            current_sheet.cell(row=current_row, column=col).fill = diff_fill
+                    else:
+                        current_sheet.cell(row=current_row, column=7, value="No")
+
+                    current_row += 1
+
+    # Auto-ajustar anchos de columna
+    for sheet in [initial_sheet, current_sheet]:
+        for col_idx, column_cells in enumerate(sheet.columns, 1):
+            max_length = 0
+            column = get_column_letter(col_idx)
+            for cell in column_cells:
+                try:
+                    if len(str(cell.value)) > max_length:
+                        max_length = len(str(cell.value))
+                except:
+                    pass
+            adjusted_width = min(max_length + 2, 100) # Limitar ancho máximo
+            sheet.column_dimensions[column].width = adjusted_width
+
+    # Guardar el archivo Excel
+    try:
+        workbook.save(excel_filename)
+        print(f"Archivo de comparación Excel generado: {excel_filename}")
+    except Exception as e:
+        print(f"Error al escribir el archivo Excel {excel_filename}: {e}")
+
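The sort of array indices above maps keys like "10" or "1,2" (multi-dimensional arrays) to lists of ints, so "10" lands after "2" instead of before it lexicographically. The key function checked in isolation:

# Example (not part of the commit): numeric sort of array-index strings.
indices = ["10", "2", "1,2", "1,10"]
ordered = sorted(indices, key=lambda x: [int(i) for i in x.split(',')])
assert ordered == ["1,2", "1,10", "2", "10"]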
 def main():
-    # Obtener directorio de trabajo
     working_dir = find_working_directory()
     print(f"Using working directory: {working_dir}")

-    # Crear directorio para JSON si no existe
     output_json_dir = os.path.join(working_dir, "json")
+    documentation_dir = os.path.join(working_dir, "documentation")
     os.makedirs(output_json_dir, exist_ok=True)
+    os.makedirs(documentation_dir, exist_ok=True)
     print(f"Los archivos JSON se guardarán en: {output_json_dir}")
+    print(f"Los archivos de documentación se guardarán en: {documentation_dir}")

-    # Buscar pares de archivos _data y _format
     matched_pairs = find_matching_files(working_dir)

     if not matched_pairs:
@@ -357,35 +599,51 @@ def main():
         # Parsear archivos a JSON
         data_json, format_json = parse_files_to_json(data_file, format_file, output_json_dir)

-        # Aplanar variables por offset
-        print("Aplanando variables por offset...")
-        data_offset_map = flatten_variables_by_offset(data_json)
-        format_offset_map = flatten_variables_by_offset(format_json)
-
-        # Comparar estructuras usando offset como clave
-        print(f"Comparando estructuras: {len(data_offset_map)} variables en _data, {len(format_offset_map)} variables en _format")
-        compatible, issues = compare_structures_by_offset(data_offset_map, format_offset_map)
-
-        if not compatible:
-            print("\nSe encontraron problemas de compatibilidad entre los archivos:")
-            for issue in issues:
-                print(f"  - {issue}")
-            print("\nAbortando el proceso para este par de archivos.")
-            continue
-
-        print("\nLos archivos son compatibles. Creando el archivo _updated...")
-
-        # Crear JSON actualizado usando el mapa de offsets de _data
-        updated_json = create_updated_json(data_json, format_json)
-
-        # Guardar la versión actualizada
-        base_name = os.path.basename(format_file).replace("_format", "").split('.')[0]
-        updated_json_path = os.path.join(output_json_dir, f"{base_name}_updated.json")
-
-        with open(updated_json_path, "w", encoding='utf-8') as f:
-            json.dump(updated_json, f, default=custom_json_serializer, indent=2)
-
-        print(f"Archivo _updated generado: {updated_json_path}")
+        # Verificar compatibilidad usando listas aplanadas
+        all_compatible = True
+        for db_idx, format_db in enumerate(format_json.get("dbs", [])):
+            # Buscar el DB correspondiente en data_json
+            data_db = next((db for db in data_json.get("dbs", []) if db["name"] == format_db["name"]), None)
+            if not data_db:
+                print(f"Error: No se encontró DB '{format_db['name']}' en el archivo data")
+                all_compatible = False
+                continue
+
+            # Aplanar variables de ambos DBs
+            flat_data_vars = flatten_db_structure(data_db)
+            flat_format_vars = flatten_db_structure(format_db)
+
+            print(f"Comparando estructuras para DB '{format_db['name']}': {len(flat_data_vars)} variables en _data, {len(flat_format_vars)} variables en _format")
+            compatible, issues = compare_structures_by_offset(flat_data_vars, flat_format_vars)
+
+            if not compatible:
+                all_compatible = False
+                print(f"\nSe encontraron problemas de compatibilidad en DB '{format_db['name']}':")
+                for issue in issues:
+                    print(f"  - {issue}")
+                print(f"Abortando el proceso para este DB.")
+
+        if all_compatible:
+            print("\nLos archivos son compatibles. Creando el archivo _updated...")
+
+            # Crear JSON actualizado
+            updated_json = create_updated_json(data_json, format_json)
+
+            # Guardar la versión actualizada
+            base_name = os.path.basename(format_file).replace("_format", "").split('.')[0]
+            updated_json_path = os.path.join(output_json_dir, f"{base_name}_updated.json")
+
+            with open(updated_json_path, "w", encoding='utf-8') as f:
+                json.dump(updated_json, f, default=custom_json_serializer, indent=2)
+
+            print(f"Archivo _updated generado: {updated_json_path}")
+
+            # Generar archivo de comparación Excel
+            comparison_excel_path = os.path.join(documentation_dir, f"{base_name}_comparison.xlsx")
+            generate_comparison_excel(format_json, data_json, updated_json, comparison_excel_path)
+
+            # Procesar el JSON actualizado para generar archivos Markdown y S7
+            process_updated_json(updated_json, updated_json_path, working_dir, documentation_dir, format_file)

     print("\n--- Proceso completado ---")
|
|
340
data/log.txt
340
data/log.txt
|
@ -1,319 +1,21 @@
|
||||||
[00:51:17] Iniciando ejecución de x7_value_updater.py en C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001...
|
[02:56:24] Iniciando ejecución de x7_value_updater.py en C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001...
|
||||||
[00:51:18] Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
|
[02:56:24] Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
|
||||||
[00:51:18] Los archivos JSON se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json
|
[02:56:24] Los archivos JSON se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json
|
||||||
[00:51:18] Se encontraron 1 pares de archivos para procesar.
|
[02:56:24] Los archivos de documentación se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
|
||||||
[00:51:18] --- Procesando par de archivos ---
|
[02:56:24] Se encontraron 1 pares de archivos para procesar.
|
||||||
[00:51:18] Data file: db1001_data.db
|
[02:56:24] --- Procesando par de archivos ---
|
||||||
[00:51:18] Format file: db1001_format.db
|
[02:56:24] Data file: db1001_data.db
|
||||||
[00:51:18] Parseando archivo data: db1001_data.db
|
[02:56:24] Format file: db1001_format.db
|
||||||
[00:51:18] Parseando archivo format: db1001_format.db
|
[02:56:24] Parseando archivo data: db1001_data.db
|
||||||
[00:51:18] Archivos JSON generados: db1001_data.json y db1001_format.json
|
[02:56:24] Parseando archivo format: db1001_format.db
|
||||||
[00:51:18] Aplanando variables por offset...
|
[02:56:24] Archivos JSON generados: db1001_data.json y db1001_format.json
|
||||||
[00:51:18] Comparando estructuras: 251 variables en _data, 251 variables en _format
|
[02:56:24] Comparando estructuras para DB 'HMI_Blender_Parameters': 284 variables en _data, 284 variables en _format
|
||||||
[00:51:18] Los archivos son compatibles. Creando el archivo _updated...
|
[02:56:24] Los archivos son compatibles. Creando el archivo _updated...
|
||||||
[00:51:18] Mapeando STAT0.STAT1.STAT2 -> Processor_Options.Blender_OPT._ModelNum (offset 0.0)
|
[02:56:24] Archivo _updated generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_updated.json
|
||||||
[00:51:18] Mapeando STAT0.STAT1.STAT3 -> Processor_Options.Blender_OPT._CO2_Offset (offset 2.0)
|
[02:56:25] Archivo de comparación Excel generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_comparison.xlsx
-[00:51:18] Mapping STAT0.STAT1.STAT4 -> Processor_Options.Blender_OPT._MaxSyrDeltaBrix (offset 6.0)
+[02:56:25] Markdown file generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.md
-[00:51:18] Mapping STAT0.STAT1.STAT5 -> Processor_Options.Blender_OPT._BrixMeter (offset 10.0)
+[02:56:25] S7 file generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.txt
-[00:51:18] Mapping STAT0.STAT1.STAT6 -> Processor_Options.Blender_OPT.Spare101 (offset 10.1)
+[02:56:25] S7 file copied to: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\db1001_updated.db
-[00:51:18] Mapping STAT0.STAT1.STAT7 -> Processor_Options.Blender_OPT._TrackH2OEnable (offset 10.2)
+[02:56:25] --- Process completed ---
-[00:51:18] Mapping STAT0.STAT1.STAT8 -> Processor_Options.Blender_OPT._PAmPDSType (offset 10.3)
+[02:56:25] Execution of x7_value_updater.py finished (success). Duration: 0:00:00.761362.
-[00:51:18] Mapping STAT0.STAT1.STAT9 -> Processor_Options.Blender_OPT._HistoricalTrends (offset 10.4)
+[02:56:25] Full log saved to: D:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\S7_DB_Utils\log_x7_value_updater.txt
-[00:51:18] Mapping STAT0.STAT1.STAT10 -> Processor_Options.Blender_OPT._PowerMeter (offset 10.5)
-[00:51:18] Mapping STAT0.STAT1.STAT11 -> Processor_Options.Blender_OPT._Report (offset 10.6)
-[00:51:18] Mapping STAT0.STAT1.STAT12 -> Processor_Options.Blender_OPT._Balaiage (offset 10.7)
-[00:51:18] Mapping STAT0.STAT1.STAT13 -> Processor_Options.Blender_OPT._Valves_FullFeedback (offset 11.0)
-[00:51:18] Mapping STAT0.STAT1.STAT14 -> Processor_Options.Blender_OPT._Valves_SingleFeedback (offset 11.1)
-[00:51:18] Mapping STAT0.STAT1.STAT15 -> Processor_Options.Blender_OPT._PumpsSafetySwitches (offset 11.2)
-[00:51:18] Mapping STAT0.STAT1.STAT16 -> Processor_Options.Blender_OPT._SurgeProtectionAct (offset 11.3)
-[00:51:18] Mapping STAT0.STAT1.STAT17 -> Processor_Options.Blender_OPT._DBC_Type (offset 11.4)
-[00:51:18] Mapping STAT0.STAT1.STAT18 -> Processor_Options.Blender_OPT._CO2InletMeter (offset 11.5)
-[00:51:18] Mapping STAT0.STAT1.STAT19 -> Processor_Options.Blender_OPT._ProductO2Meter (offset 11.6)
-[00:51:18] Mapping STAT0.STAT1.STAT20 -> Processor_Options.Blender_OPT._CopressedAirInletMeter (offset 11.7)
-[00:51:18] Mapping STAT0.STAT1.STAT21 -> Processor_Options.Blender_OPT._MeterType (offset 12.0)
-[00:51:18] Mapping STAT0.STAT1.STAT22 -> Processor_Options.Blender_OPT._MeterReceiveOnly (offset 14.0)
-[00:51:18] Mapping STAT0.STAT1.STAT23 -> Processor_Options.Blender_OPT._SyrBrixMeter (offset 14.1)
-[00:51:18] Mapping STAT0.STAT1.STAT24 -> Processor_Options.Blender_OPT._Flooding_Start_Up (offset 14.2)
-[00:51:18] Mapping STAT0.STAT1.STAT25 -> Processor_Options.Blender_OPT._FastChangeOverEnabled (offset 14.3)
-[00:51:18] Mapping STAT0.STAT1.STAT26 -> Processor_Options.Blender_OPT._WaterInletMeter (offset 14.4)
-[00:51:18] Mapping STAT0.STAT1.STAT27 -> Processor_Options.Blender_OPT._BlendFillSystem (offset 14.5)
-[00:51:18] Mapping STAT0.STAT1.STAT28 -> Processor_Options.Blender_OPT._TrackFillerSpeed (offset 14.6)
-[00:51:18] Mapping STAT0.STAT1.STAT29 -> Processor_Options.Blender_OPT._SignalExchange (offset 16.0)
-[00:51:18] Mapping STAT0.STAT1.STAT30 -> Processor_Options.Blender_OPT._CoolerPresent (offset 18.0)
-[00:51:18] Mapping STAT0.STAT1.STAT31 -> Processor_Options.Blender_OPT._CoolerControl (offset 20.0)
-[00:51:18] Mapping STAT0.STAT1.STAT32 -> Processor_Options.Blender_OPT._CoolerType (offset 22.0)
-[00:51:18] Mapping STAT0.STAT1.STAT33 -> Processor_Options.Blender_OPT._LocalCIP (offset 24.0)
-[00:51:18] Mapping STAT0.STAT1.STAT34 -> Processor_Options.Blender_OPT._ICS_CustomerHotWater (offset 24.1)
-[00:51:18] Mapping STAT0.STAT1.STAT35 -> Processor_Options.Blender_OPT._ICS_CustomerChemRecov (offset 24.2)
-[00:51:18] Mapping STAT0.STAT1.STAT36 -> Processor_Options.Blender_OPT._CIPSignalExchange (offset 24.3)
-[00:51:18] Mapping STAT0.STAT1.STAT37 -> Processor_Options.Blender_OPT._ICS_CustomerChemicals (offset 24.4)
-[00:51:18] Mapping STAT0.STAT1.STAT38 -> Processor_Options.Blender_OPT._CarboPresent (offset 24.5)
-[00:51:18] Mapping STAT0.STAT1.STAT39 -> Processor_Options.Blender_OPT._InverterSyrupPumpPPP302 (offset 24.6)
-[00:51:18] Mapping STAT0.STAT1.STAT40 -> Processor_Options.Blender_OPT._InverterWaterPumpPPN301 (offset 24.7)
-[00:51:18] Mapping STAT0.STAT1.STAT41 -> Processor_Options.Blender_OPT._DoubleDeair (offset 25.0)
-[00:51:18] Mapping STAT0.STAT1.STAT42 -> Processor_Options.Blender_OPT._DeairPreMixed (offset 25.1)
-[00:51:18] Mapping STAT0.STAT1.STAT43 -> Processor_Options.Blender_OPT._Deaireation (offset 25.2)
-[00:51:18] Mapping STAT0.STAT1.STAT44 -> Processor_Options.Blender_OPT._StillWaterByPass (offset 25.3)
-[00:51:18] Mapping STAT0.STAT1.STAT45 -> Processor_Options.Blender_OPT._ManifoldSetting (offset 25.4)
-[00:51:18] Mapping STAT0.STAT1.STAT46 -> Processor_Options.Blender_OPT._InverterProdPumpPPM303 (offset 25.5)
-[00:51:18] Mapping STAT0.STAT1.STAT47 -> Processor_Options.Blender_OPT._SidelCip (offset 25.6)
-[00:51:18] Mapping STAT0.STAT1.STAT48 -> Processor_Options.Blender_OPT._EthernetCom_CpuPN_CP (offset 25.7)
-[00:51:18] Mapping STAT0.STAT1.STAT49 -> Processor_Options.Blender_OPT._2ndOutlet (offset 26.0)
-[00:51:18] Mapping STAT0.STAT1.STAT50 -> Processor_Options.Blender_OPT._Promass (offset 28.0)
-[00:51:18] Mapping STAT0.STAT1.STAT51 -> Processor_Options.Blender_OPT._WaterPromass (offset 30.0)
-[00:51:18] Mapping STAT0.STAT1.STAT52 -> Processor_Options.Blender_OPT._ProductConductimeter (offset 30.1)
-[00:51:18] Mapping STAT0.STAT1.STAT53 -> Processor_Options.Blender_OPT._ICS_CustomerH2ORecov (offset 30.2)
-[00:51:18] Mapping STAT0.STAT1.STAT54 -> Processor_Options.Blender_OPT.Spare303 (offset 30.3)
-[00:51:18] Mapping STAT0.STAT1.STAT55 -> Processor_Options.Blender_OPT._CO2_GAS2_Injection (offset 30.4)
-[00:51:18] Mapping STAT0.STAT1.STAT56 -> Processor_Options.Blender_OPT._InverterVacuuPumpPPN304 (offset 30.5)
-[00:51:18] Mapping STAT0.STAT1.STAT57 -> Processor_Options.Blender_OPT._InverterBoostPumpPPM307 (offset 30.6)
-[00:51:18] Mapping STAT0.STAT1.STAT58 -> Processor_Options.Blender_OPT._RunOut_Water (offset 30.7)
-[00:51:18] Mapping STAT0.STAT1.STAT59 -> Processor_Options.Blender_OPT._FlowMeterType (offset 31.0)
-[00:51:18] Mapping STAT0.STAT1.STAT60 -> Processor_Options.Blender_OPT._SidelFiller (offset 31.1)
-[00:51:18] Mapping STAT0.STAT1.STAT61 -> Processor_Options.Blender_OPT._Simulation (offset 31.2)
-[00:51:18] Mapping STAT0.STAT1.STAT62 -> Processor_Options.Blender_OPT._ProductCoolingCTRL (offset 31.3)
-[00:51:18] Mapping STAT0.STAT1.STAT63 -> Processor_Options.Blender_OPT._ChillerCTRL (offset 31.4)
-[00:51:18] Mapping STAT0.STAT1.STAT64 -> Processor_Options.Blender_OPT._CO2_SterileFilter (offset 31.5)
-[00:51:18] Mapping STAT0.STAT1.STAT65 -> Processor_Options.Blender_OPT._InverterRecirPumpPPM306 (offset 31.6)
-[00:51:18] Mapping STAT0.STAT1.STAT66 -> Processor_Options.Blender_OPT._ProdPressReleaseRVM304 (offset 31.7)
-[00:51:18] Mapping STAT0.STAT1.STAT67 -> Processor_Options.Blender_OPT._VacuumPump (offset 32.0)
-[00:51:18] Mapping STAT0.STAT1.STAT68 -> Processor_Options.Blender_OPT._GAS2InjectionType (offset 34.0)
-[00:51:18] Mapping STAT0.STAT1.STAT69 -> Processor_Options.Blender_OPT._InjectionPress_Ctrl (offset 36.0)
-[00:51:18] Mapping STAT0.STAT1.STAT70 -> Processor_Options.Blender_OPT._ProdPressureType (offset 38.0)
-[00:51:18] Mapping STAT0.STAT1.STAT71 -> Processor_Options.Blender_OPT._CIPHeatType (offset 40.0)
-[00:51:18] Mapping STAT0.STAT1.STAT72 -> Processor_Options.Blender_OPT._EHS_NrRes (offset 42.0)
-[00:51:18] Mapping STAT73[1] -> Spare1 (offset 44.0)
-[00:51:18] Mapping STAT73[2] -> Spare1 (offset 44.0)
-[00:51:18] Mapping STAT73[3] -> Spare1 (offset 44.0)
-[00:51:18] Mapping STAT73[4] -> Spare1 (offset 44.0)
-[00:51:18] Mapping STAT73[5] -> Spare1 (offset 44.0)
-[00:51:18] Mapping STAT73[6] -> Spare1 (offset 44.0)
-[00:51:18] Mapping STAT73[7] -> Spare1 (offset 44.0)
-[00:51:18] Mapping STAT73[8] -> Spare1 (offset 44.0)
-[00:51:18] Mapping STAT73[9] -> Spare1 (offset 44.0)
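
Every STAT73[i] element above reports the same target and offset (44.0) because the log prints the array's declared base offset, not each element's address. Since the next variable (STAT74) starts at offset 62.0, the nine elements plausibly occupy 2 bytes each (bytes 44 through 61). A small sketch of deriving per-element byte offsets under that assumption (the element size depends on the declared type, which this log does not show):

def element_offsets(base_offset: float, count: int, elem_size_bytes: int):
    """Yield (index, byte_offset) for ARRAY[1..count] laid out from base_offset.
    S7 offsets use x.y notation, where x is the byte and y the bit."""
    base = int(base_offset)
    for i in range(1, count + 1):
        yield i, base + (i - 1) * elem_size_bytes

# list(element_offsets(44.0, 9, 2)) -> [(1, 44), (2, 46), ..., (9, 60)];
# the following variable (STAT74) then starts at byte 62, matching the log.
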
-[00:51:18] Mapping STAT74 -> _RVM301_DeadBand (offset 62.0)
-[00:51:18] Mapping STAT75 -> _RVM301_Kp (offset 66.0)
-[00:51:18] Mapping STAT76.STAT77 -> Actual_Recipe_Parameters._Name (offset 70.0)
-[00:51:18] Mapping STAT76.STAT78 -> Actual_Recipe_Parameters._EnProdTemp (offset 104.0)
-[00:51:18] Mapping STAT76.STAT79 -> Actual_Recipe_Parameters._SyrFlushing (offset 104.1)
-[00:51:18] Mapping STAT76.STAT80 -> Actual_Recipe_Parameters._GAS2_Injection (offset 104.2)
-[00:51:18] Mapping STAT76.STAT81 -> Actual_Recipe_Parameters._Eq_Pression_Selected (offset 104.3)
-[00:51:18] Mapping STAT76.STAT82 -> Actual_Recipe_Parameters._DeoxStripEn (offset 104.4)
-[00:51:18] Mapping STAT76.STAT83 -> Actual_Recipe_Parameters._DeoxVacuumEn (offset 104.5)
-[00:51:18] Mapping STAT76.STAT84 -> Actual_Recipe_Parameters._DeoxPreMixed (offset 104.6)
-[00:51:18] Mapping STAT76.STAT85 -> Actual_Recipe_Parameters._EnBlowOffProdPipeCO2Fil (offset 104.7)
-[00:51:18] Mapping STAT76.STAT86 -> Actual_Recipe_Parameters._WaterSelection (offset 105.0)
-[00:51:18] Mapping STAT76.STAT87 -> Actual_Recipe_Parameters._FillerNextRecipeNum (offset 106.0)
-[00:51:18] Mapping STAT76.STAT88 -> Actual_Recipe_Parameters._BottleShape (offset 107.0)
-[00:51:18] Mapping STAT76.STAT89 -> Actual_Recipe_Parameters._Type (offset 108.0)
-[00:51:18] Mapping STAT76.STAT90 -> Actual_Recipe_Parameters._ProdMeterRecipeNum (offset 110.0)
-[00:51:18] Mapping STAT76.STAT91 -> Actual_Recipe_Parameters._SyrupBrix (offset 112.0)
-[00:51:18] Mapping STAT76.STAT92 -> Actual_Recipe_Parameters._SyrupDensity (offset 116.0)
-[00:51:18] Mapping STAT76.STAT93 -> Actual_Recipe_Parameters._SyrupFactor (offset 120.0)
-[00:51:18] Mapping STAT76.STAT94 -> Actual_Recipe_Parameters._ProductBrix (offset 124.0)
-[00:51:18] Mapping STAT76.STAT95 -> Actual_Recipe_Parameters._ProductionRate (offset 128.0)
-[00:51:18] Mapping STAT76.STAT96 -> Actual_Recipe_Parameters._Ratio (offset 132.0)
-[00:51:18] Mapping STAT76.STAT97 -> Actual_Recipe_Parameters._ProdBrixOffset (offset 136.0)
-[00:51:18] Mapping STAT76.STAT98 -> Actual_Recipe_Parameters._CO2Vols (offset 140.0)
-[00:51:18] Mapping STAT76.STAT99 -> Actual_Recipe_Parameters._CO2Fact (offset 144.0)
-[00:51:18] Mapping STAT76.STAT100 -> Actual_Recipe_Parameters._ProdTankPress (offset 148.0)
-[00:51:18] Mapping STAT76.STAT101 -> Actual_Recipe_Parameters._SP_ProdTemp (offset 152.0)
-[00:51:18] Mapping STAT76.STAT102 -> Actual_Recipe_Parameters._PrdTankMinLevel (offset 156.0)
-[00:51:18] Mapping STAT76.STAT103 -> Actual_Recipe_Parameters._WaterValveSave (offset 160.0)
-[00:51:18] Mapping STAT76.STAT104 -> Actual_Recipe_Parameters._SyrupValveSave (offset 164.0)
-[00:51:18] Mapping STAT76.STAT105 -> Actual_Recipe_Parameters._CarboCO2ValveSave (offset 168.0)
-[00:51:18] Mapping STAT76.STAT106 -> Actual_Recipe_Parameters._ProdMeterHighBrix (offset 172.0)
-[00:51:18] Mapping STAT76.STAT107 -> Actual_Recipe_Parameters._ProdMeterLowBrix (offset 176.0)
-[00:51:18] Mapping STAT76.STAT108 -> Actual_Recipe_Parameters._ProdMeterHighCO2 (offset 180.0)
-[00:51:18] Mapping STAT76.STAT109 -> Actual_Recipe_Parameters._ProdMeterLowCO2 (offset 184.0)
-[00:51:18] Mapping STAT76.STAT110 -> Actual_Recipe_Parameters._ProdMeter_ZeroCO2 (offset 188.0)
-[00:51:18] Mapping STAT76.STAT111 -> Actual_Recipe_Parameters._ProdMeter_ZeroBrix (offset 192.0)
-[00:51:18] Mapping STAT76.STAT112 -> Actual_Recipe_Parameters._ProdHighCond (offset 196.0)
-[00:51:18] Mapping STAT76.STAT113 -> Actual_Recipe_Parameters._ProdLowCond (offset 200.0)
-[00:51:18] Mapping STAT76.STAT114 -> Actual_Recipe_Parameters._BottleSize (offset 204.0)
-[00:51:18] Mapping STAT76.STAT115 -> Actual_Recipe_Parameters._FillingValveHead_SP (offset 208.0)
-[00:51:18] Mapping STAT76.STAT116 -> Actual_Recipe_Parameters._SyrMeter_ZeroBrix (offset 212.0)
-[00:51:18] Mapping STAT76.STAT117 -> Actual_Recipe_Parameters._FirstProdExtraCO2Fact (offset 216.0)
-[00:51:18] Mapping STAT76.STAT118 -> Actual_Recipe_Parameters._Gas2Vols (offset 220.0)
-[00:51:18] Mapping STAT76.STAT119 -> Actual_Recipe_Parameters._Gas2Fact (offset 224.0)
-[00:51:18] Mapping STAT76.STAT120 -> Actual_Recipe_Parameters._SyrupPumpPressure (offset 228.0)
-[00:51:18] Mapping STAT76.STAT121 -> Actual_Recipe_Parameters._WaterPumpPressure (offset 232.0)
-[00:51:18] Mapping STAT76.STAT122 -> Actual_Recipe_Parameters._CO2_Air_N2_PressSelect (offset 236.0)
-[00:51:18] Mapping STAT76.STAT123 -> Actual_Recipe_Parameters._KFactRVM304BlowOff (offset 238.0)
-[00:51:18] Mapping STAT76.STAT124 -> Actual_Recipe_Parameters._ProdRecircPumpFreq (offset 242.0)
-[00:51:18] Mapping STAT76.STAT125 -> Actual_Recipe_Parameters._ProdBoosterPumpPress (offset 246.0)
-[00:51:18] Mapping STAT76.STAT126 -> Actual_Recipe_Parameters._ProdSendPumpFreq (offset 250.0)
-[00:51:18] Mapping STAT127[1] -> Spare2 (offset 254.0)
-[00:51:18] Mapping STAT127[2] -> Spare2 (offset 254.0)
-[00:51:18] Mapping STAT127[3] -> Spare2 (offset 254.0)
-[00:51:18] Mapping STAT127[4] -> Spare2 (offset 254.0)
-[00:51:18] Mapping STAT127[5] -> Spare2 (offset 254.0)
-[00:51:18] Mapping STAT128 -> Next_Recipe_Name (offset 264.0)
-[00:51:18] Mapping STAT129 -> Next_Recipe_Number (offset 298.0)
-[00:51:18] Mapping STAT130[1] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[2] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[3] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[4] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[5] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[6] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[7] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[8] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[9] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[10] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[11] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[12] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[13] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[14] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[15] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[16] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[17] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT130[18] -> Spare3 (offset 300.0)
-[00:51:18] Mapping STAT131.STAT132 -> ProcessSetup.Spare000 (offset 336.0)
-[00:51:18] Mapping STAT131.STAT133 -> ProcessSetup.Spare040 (offset 340.0)
-[00:51:18] Mapping STAT131.STAT134 -> ProcessSetup._KWaterLoss (offset 344.0)
-[00:51:18] Mapping STAT131.STAT135 -> ProcessSetup._KSyrupLoss (offset 348.0)
-[00:51:18] Mapping STAT131.STAT136 -> ProcessSetup._KProdLoss (offset 352.0)
-[00:51:18] Mapping STAT131.STAT137 -> ProcessSetup._KPPM303 (offset 356.0)
-[00:51:18] Mapping STAT131.STAT138 -> ProcessSetup._BaialageRVM301OVMin (offset 360.0)
-[00:51:18] Mapping STAT131.STAT139 -> ProcessSetup._SyrupLinePressure (offset 364.0)
-[00:51:18] Mapping STAT131.STAT140 -> ProcessSetup._CIPRMM301OV (offset 368.0)
-[00:51:18] Mapping STAT131.STAT141 -> ProcessSetup._CIPRMP302OV (offset 372.0)
-[00:51:18] Mapping STAT131.STAT142 -> ProcessSetup._CIPTM301MinLevel (offset 376.0)
-[00:51:18] Mapping STAT131.STAT143 -> ProcessSetup._CIPTM301MaxLevel (offset 380.0)
-[00:51:18] Mapping STAT131.STAT144 -> ProcessSetup._CIPPPM303Freq (offset 384.0)
-[00:51:18] Mapping STAT131.STAT145 -> ProcessSetup._CIPTP301MinLevel (offset 388.0)
-[00:51:18] Mapping STAT131.STAT146 -> ProcessSetup._CIPTP301MaxLevel (offset 392.0)
-[00:51:18] Mapping STAT131.STAT147 -> ProcessSetup._RinseRMM301OV (offset 396.0)
-[00:51:18] Mapping STAT131.STAT148 -> ProcessSetup._RinseRMP302OV (offset 400.0)
-[00:51:18] Mapping STAT131.STAT149 -> ProcessSetup._RinseTM301Press (offset 404.0)
-[00:51:18] Mapping STAT131.STAT150 -> ProcessSetup._RinsePPM303Freq (offset 408.0)
-[00:51:18] Mapping STAT131.STAT151 -> ProcessSetup._DrainTM301Press (offset 412.0)
-[00:51:18] Mapping STAT131.STAT152 -> ProcessSetup._KRecBlendError (offset 416.0)
-[00:51:18] Mapping STAT131.STAT153 -> ProcessSetup._KRecCarboCO2Error (offset 420.0)
-[00:51:18] Mapping STAT131.STAT154 -> ProcessSetup._MaxBlendError (offset 424.0)
-[00:51:18] Mapping STAT131.STAT155 -> ProcessSetup._MaxCarboCO2Error (offset 428.0)
-[00:51:18] Mapping STAT131.STAT156 -> ProcessSetup._StartUpBrixExtraWater (offset 432.0)
-[00:51:18] Mapping STAT131.STAT157 -> ProcessSetup._StartUpCO2ExtraWater (offset 436.0)
-[00:51:18] Mapping STAT131.STAT158 -> ProcessSetup._StartUpPPM303Freq (offset 440.0)
-[00:51:18] Mapping STAT131.STAT159 -> ProcessSetup._SyrupRoomTank (offset 444.0)
-[00:51:18] Mapping STAT131.STAT160 -> ProcessSetup._SyrupRunOutLiters (offset 446.0)
-[00:51:18] Mapping STAT131.STAT161 -> ProcessSetup._InjCO2Press_Offset (offset 450.0)
-[00:51:18] Mapping STAT131.STAT162 -> ProcessSetup._InjCO2Press_MinFlow (offset 454.0)
-[00:51:18] Mapping STAT131.STAT163 -> ProcessSetup._InjCO2Press_MaxFlow (offset 458.0)
-[00:51:18] Mapping STAT131.STAT164 -> ProcessSetup._CarboCO2Pressure (offset 462.0)
-[00:51:18] Mapping STAT131.STAT165 -> ProcessSetup._N2MinPressure (offset 466.0)
-[00:51:18] Mapping STAT131.STAT166 -> ProcessSetup._DiffSensor_Height (offset 470.0)
-[00:51:18] Mapping STAT131.STAT167 -> ProcessSetup._DiffSensor_DeltaHeight (offset 474.0)
-[00:51:18] Mapping STAT131.STAT168 -> ProcessSetup._DiffSensor_Offset (offset 478.0)
-[00:51:18] Mapping STAT131.STAT169 -> ProcessSetup._FillingValveHeight (offset 482.0)
-[00:51:18] Mapping STAT131.STAT170 -> ProcessSetup._FillerDiameter (offset 486.0)
-[00:51:18] Mapping STAT131.STAT171 -> ProcessSetup._FillingValveNum (offset 490.0)
-[00:51:18] Mapping STAT131.STAT172 -> ProcessSetup._FillerProdPipeDN (offset 492.0)
-[00:51:18] Mapping STAT131.STAT173 -> ProcessSetup._FillerProdPipeMass (offset 496.0)
-[00:51:18] Mapping STAT131.STAT174 -> ProcessSetup._FillingTime (offset 500.0)
-[00:51:18] Mapping STAT131.STAT175 -> ProcessSetup._TM301Height_0 (offset 504.0)
-[00:51:18] Mapping STAT131.STAT176 -> ProcessSetup._TM301LevelPerc_2 (offset 508.0)
-[00:51:18] Mapping STAT131.STAT177 -> ProcessSetup._TM301Height_2 (offset 512.0)
-[00:51:18] Mapping STAT131.STAT178 -> ProcessSetup._RVN304Factor (offset 516.0)
-[00:51:18] Mapping STAT131.STAT179 -> ProcessSetup._DrainTM301Flushing (offset 520.0)
-[00:51:18] Mapping STAT131.STAT180 -> ProcessSetup._FirstProdExtraBrix (offset 524.0)
-[00:51:18] Mapping STAT131.STAT181 -> ProcessSetup._FirstProdDietExtraSyr (offset 528.0)
-[00:51:18] Mapping STAT131.STAT182 -> ProcessSetup._EndProdLastSyrlt (offset 532.0)
-[00:51:18] Mapping STAT131.STAT183 -> ProcessSetup._TM301DrainSt0Time (offset 536.0)
-[00:51:18] Mapping STAT131.STAT184 -> ProcessSetup._TM301DrainSt1Time (offset 538.0)
-[00:51:18] Mapping STAT131.STAT185 -> ProcessSetup._ProdPipeRunOutSt0Time (offset 540.0)
-[00:51:18] Mapping STAT131.STAT186 -> ProcessSetup._RMM301ProdPipeRunOu (offset 542.0)
-[00:51:18] Mapping STAT131.STAT187 -> ProcessSetup._RMP302ProdPipeRunOu (offset 546.0)
-[00:51:18] Mapping STAT131.STAT188 -> ProcessSetup._ProdPipeRunOutAmount (offset 550.0)
-[00:51:18] Mapping STAT131.STAT189 -> ProcessSetup._TM301RunOutChiller (offset 554.0)
-[00:51:18] Mapping STAT131.STAT190 -> ProcessSetup._MinSpeedNominalProd (offset 558.0)
-[00:51:18] Mapping STAT131.STAT191 -> ProcessSetup._MinSpeedSlowProd (offset 562.0)
-[00:51:18] Mapping STAT131.STAT192 -> ProcessSetup._FastChgOvrTM301DrnPrss (offset 566.0)
-[00:51:18] Mapping STAT131.STAT193 -> ProcessSetup._CIPTN301MinLevel (offset 570.0)
-[00:51:18] Mapping STAT131.STAT194 -> ProcessSetup._CIPTN301MaxLevel (offset 574.0)
-[00:51:18] Mapping STAT131.STAT195 -> ProcessSetup._ProdPPN304Freq (offset 578.0)
-[00:51:18] Mapping STAT131.STAT196 -> ProcessSetup._GAS2InjectionPress (offset 582.0)
-[00:51:18] Mapping STAT131.STAT197 -> ProcessSetup._BaialageRVM301OVMax (offset 586.0)
-[00:51:18] Mapping STAT131.STAT198 -> ProcessSetup._RinsePPN301Freq (offset 590.0)
-[00:51:18] Mapping STAT131.STAT199 -> ProcessSetup._CIPPPN301Freq (offset 594.0)
-[00:51:18] Mapping STAT131.STAT200 -> ProcessSetup._RinsePPP302Freq (offset 598.0)
-[00:51:18] Mapping STAT131.STAT201 -> ProcessSetup._CIPPPP302Freq (offset 602.0)
-[00:51:18] Mapping STAT131.STAT202 -> ProcessSetup._PercSyrupBrixSyrStarUp (offset 606.0)
-[00:51:18] Mapping STAT131.STAT203 -> ProcessSetup._RefTempCoolingCTRL (offset 610.0)
-[00:51:18] Mapping STAT131.STAT204 -> ProcessSetup._H2OSerpPrimingVolume (offset 614.0)
-[00:51:18] Mapping STAT131.STAT205 -> ProcessSetup._AVN301_Nozzle_Kv (offset 618.0)
-[00:51:18] Mapping STAT131.STAT206 -> ProcessSetup._AVN302_Nozzle_Kv (offset 622.0)
-[00:51:18] Mapping STAT131.STAT207 -> ProcessSetup._AVN303_Nozzle_Kv (offset 626.0)
-[00:51:18] Mapping STAT131.STAT208 -> ProcessSetup._DeoxSpryball_Kv (offset 630.0)
-[00:51:18] Mapping STAT131.STAT209 -> ProcessSetup._PremixedLineDrainTime (offset 634.0)
-[00:51:18] Mapping STAT131.STAT210 -> ProcessSetup._PPN301_H_MaxFlow (offset 636.0)
-[00:51:18] Mapping STAT131.STAT211 -> ProcessSetup._PPN301_H_MinFlow (offset 640.0)
-[00:51:18] Mapping STAT131.STAT212 -> ProcessSetup._PPN301_MaxFlow (offset 644.0)
-[00:51:18] Mapping STAT131.STAT213 -> ProcessSetup._PPN301_MinFlow (offset 648.0)
-[00:51:18] Mapping STAT131.STAT214 -> ProcessSetup._PPP302_H_MaxFlow (offset 652.0)
-[00:51:18] Mapping STAT131.STAT215 -> ProcessSetup._PPP302_H_MinFlow (offset 656.0)
-[00:51:18] Mapping STAT131.STAT216 -> ProcessSetup._PPP302_MaxFlow (offset 660.0)
-[00:51:18] Mapping STAT131.STAT217 -> ProcessSetup._PPP302_MinFlow (offset 664.0)
-[00:51:18] Mapping STAT131.STAT218 -> ProcessSetup._RinsePPM306Freq (offset 668.0)
-[00:51:18] Mapping STAT131.STAT219 -> ProcessSetup._CIPPPM306Freq (offset 672.0)
-[00:51:18] Mapping STAT131.STAT220 -> ProcessSetup._PPM307_H_MaxFlow (offset 676.0)
-[00:51:18] Mapping STAT131.STAT221 -> ProcessSetup._PPM307_H_MinFlow (offset 680.0)
-[00:51:18] Mapping STAT131.STAT222 -> ProcessSetup._PPM307_MaxFlow (offset 684.0)
-[00:51:18] Mapping STAT131.STAT223 -> ProcessSetup._PPM307_MinFlow (offset 688.0)
-[00:51:18] Mapping STAT131.STAT224 -> ProcessSetup._Temp0_VacuumCtrl (offset 692.0)
-[00:51:18] Mapping STAT131.STAT225 -> ProcessSetup._Temp1_VacuumCtrl (offset 696.0)
-[00:51:18] Mapping STAT131.STAT226 -> ProcessSetup._Temp2_VacuumCtrl (offset 700.0)
-[00:51:18] Mapping STAT131.STAT227 -> ProcessSetup._Temp3_VacuumCtrl (offset 704.0)
-[00:51:18] Mapping STAT131.STAT228 -> ProcessSetup._Temp4_VacuumCtrl (offset 708.0)
-[00:51:18] Mapping STAT131.STAT229 -> ProcessSetup._T1_VacuumCtrl (offset 712.0)
-[00:51:18] Mapping STAT131.STAT230 -> ProcessSetup._T2_VacuumCtrl (offset 716.0)
-[00:51:18] Mapping STAT131.STAT231 -> ProcessSetup._T3_VacuumCtrl (offset 720.0)
-[00:51:18] Mapping STAT131.STAT232 -> ProcessSetup._T4_VacuumCtrl (offset 724.0)
-[00:51:18] Mapping STAT131.STAT233 -> ProcessSetup._ICS_VolDosWorkTimePAA (offset 728.0)
-[00:51:18] Mapping STAT131.STAT234 -> ProcessSetup._ICS_VolPauseTimePAA (offset 730.0)
-[00:51:18] Mapping STAT131.STAT235 -> ProcessSetup._ICS_PAAPulseWeight (offset 732.0)
-[00:51:18] Mapping STAT131.STAT236 -> ProcessSetup._ICS_CausticPulseWeight (offset 734.0)
-[00:51:18] Mapping STAT131.STAT237 -> ProcessSetup._ICS_AcidPulseWeight (offset 736.0)
-[00:51:18] Mapping STAT131.STAT238 -> ProcessSetup._ICS_VolumeRestOfLine (offset 738.0)
-[00:51:18] Mapping STAT131.STAT239 -> ProcessSetup._ICS_VolDosWorkTimeCaus (offset 742.0)
-[00:51:18] Mapping STAT131.STAT240 -> ProcessSetup._ICS_VolDosPauseTimeCaus (offset 744.0)
-[00:51:18] Mapping STAT131.STAT241 -> ProcessSetup._ICS_VolDosWorkTimeAcid (offset 746.0)
-[00:51:18] Mapping STAT131.STAT242 -> ProcessSetup._ICS_VolDosPauseTimeAcid (offset 748.0)
-[00:51:18] Mapping STAT131.STAT243 -> ProcessSetup._ICS_ConcDosWorkTimeCaus (offset 750.0)
-[00:51:18] Mapping STAT131.STAT244 -> ProcessSetup._ICS_ConcDosPausTimeCaus (offset 752.0)
-[00:51:18] Mapping STAT131.STAT245 -> ProcessSetup._ICS_ConcDosWorkTimeAcid (offset 754.0)
-[00:51:18] Mapping STAT131.STAT246 -> ProcessSetup._ICS_ConcDosPausTimeAcid (offset 756.0)
-[00:51:18] Mapping STAT131.STAT247 -> ProcessSetup._RinsePPM307Freq (offset 758.0)
-[00:51:18] Mapping STAT131.STAT248 -> ProcessSetup._CIPPPM307Freq (offset 762.0)
-[00:51:18] Mapping STAT131.STAT249 -> ProcessSetup._CIP2StepTN301Lvl (offset 766.0)
-[00:51:18] Mapping STAT131.STAT250 -> ProcessSetup._CIP2StepTM301Lvl (offset 770.0)
-[00:51:18] Mapping STAT131.STAT251 -> ProcessSetup._CIP2StepTP301Lvl (offset 774.0)
-[00:51:18] Mapping STAT131.STAT252 -> ProcessSetup._PumpNominalFreq (offset 778.0)
-[00:51:18] Mapping STAT253 -> _SwitchOff_DensityOK (offset 782.0)
-[00:51:18] Mapping STAT254 -> STAT254 (offset 784.0)
-[00:51:18] _updated file generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_updated.json
-[00:51:18] --- Process completed ---
-[00:51:18] Execution of x7_value_updater.py finished (success). Duration: 0:00:00.223903.
-[00:51:18] Full log saved to: D:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\S7_DB_Utils\log_x7_value_updater.txt
-[00:51:35] Starting execution of x4.py in C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001...
-[00:51:35] Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
-[00:51:35] Generated documentation files will be saved to: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
-[00:51:35] JSON files found to process: 3
-[00:51:35] --- Processing JSON file: db1001_data.json ---
-[00:51:35] JSON file 'db1001_data.json' loaded successfully.
-[00:51:35] INFO: Using '_begin_block_assignments_ordered' to generate the BEGIN block of DB 'HMI_Blender_Parameters'.
-[00:51:35] Reconstructed S7 file generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.txt
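
For each JSON file, x4.py reconstructs an S7 source; the INFO line notes that the BEGIN section comes from '_begin_block_assignments_ordered', that is, an ordered list of current-value assignments. A minimal sketch of that emission step (the (path, value) pair layout is assumed, not taken from the script):

def emit_begin_block(ordered_assignments):
    """Render the BEGIN section of an S7 DB source from ordered (path, value) pairs."""
    lines = ["BEGIN"]
    for path, value in ordered_assignments:
        lines.append(f"    {path} := {value};")
    lines.append("END_DATA_BLOCK")
    return "\n".join(lines)
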
-[00:51:35] Markdown documentation file generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.md
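
Alongside each reconstructed .txt, a Markdown documentation file is emitted per DB. A plausible sketch that tabulates the flattened variables (the layout of the real db1001_data.md is not shown in this log):

def to_markdown(db_name, fmt_flat):
    """Render a simple offset/variable/type table for one data block."""
    lines = [f"# {db_name}", "", "| Offset | Variable | Type |", "| --- | --- | --- |"]
    for off, (path, typ) in sorted(fmt_flat.items()):
        lines.append(f"| {off} | {path} | {typ} |")
    return "\n".join(lines)
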
-[00:51:35] --- Processing JSON file: db1001_format.json ---
-[00:51:35] JSON file 'db1001_format.json' loaded successfully.
-[00:51:35] INFO: Using '_begin_block_assignments_ordered' to generate the BEGIN block of DB 'HMI_Blender_Parameters'.
-[00:51:35] Reconstructed S7 file generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.txt
-[00:51:35] Markdown documentation file generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.md
-[00:51:35] --- Processing JSON file: db1001_updated.json ---
-[00:51:35] JSON file 'db1001_updated.json' loaded successfully.
-[00:51:35] INFO: Using '_begin_block_assignments_ordered' to generate the BEGIN block of DB 'HMI_Blender_Parameters'.
-[00:51:35] Reconstructed S7 file generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.txt
-[00:51:35] Markdown documentation file generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.md
-[00:51:35] --- Documentation generation process completed ---
-[00:51:35] Execution of x4.py finished (success). Duration: 0:00:00.110751.
-[00:51:35] Full log saved to: D:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\S7_DB_Utils\log_x4.txt