Created x7 to update the values
This commit is contained in:
parent
f76f593fef
commit
00f3b6d2ec
@@ -0,0 +1,36 @@
--- Log de Ejecución: x4.py ---
Grupo: S7_DB_Utils
Directorio de Trabajo: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
Inicio: 2025-05-17 21:37:42
Fin: 2025-05-17 21:37:42
Duración: 0:00:00.131741
Estado: SUCCESS (Código de Salida: 0)

--- SALIDA ESTÁNDAR (STDOUT) ---
Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
Los archivos de documentación generados se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
Archivos JSON encontrados para procesar: 3

--- Procesando archivo JSON: db1001_data.json ---
Archivo JSON 'db1001_data.json' cargado correctamente.
INFO: Usando '_begin_block_assignments_ordered' para generar bloque BEGIN de DB 'HMI_Blender_Parameters'.
Archivo S7 reconstruido generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.txt
Archivo Markdown de documentación generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.md

--- Procesando archivo JSON: db1001_format.json ---
Archivo JSON 'db1001_format.json' cargado correctamente.
INFO: Usando '_begin_block_assignments_ordered' para generar bloque BEGIN de DB 'HMI_Blender_Parameters'.
Archivo S7 reconstruido generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.txt
Archivo Markdown de documentación generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.md

--- Procesando archivo JSON: db1001_format_updated.json ---
Archivo JSON 'db1001_format_updated.json' cargado correctamente.
INFO: Usando '_begin_block_assignments_ordered' para generar bloque BEGIN de DB 'HMI_Blender_Parameters'.
Archivo S7 reconstruido generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format_updated.txt
Archivo Markdown de documentación generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format_updated.md

--- Proceso de generación de documentación completado ---

--- ERRORES (STDERR) ---
Ninguno
--- FIN DEL LOG ---
@@ -0,0 +1,30 @@
--- Log de Ejecución: x5.py ---
Grupo: S7_DB_Utils
Directorio de Trabajo: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
Inicio: 2025-05-17 21:51:25
Fin: 2025-05-17 21:51:25
Duración: 0:00:00.099104
Estado: SUCCESS (Código de Salida: 0)

--- SALIDA ESTÁNDAR (STDOUT) ---
Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
Los archivos Markdown de descripción se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
Archivos JSON encontrados para procesar: 3

--- Procesando archivo JSON para descripción: db1001_data.json ---
Archivo JSON 'db1001_data.json' cargado correctamente.
Documentación Markdown completa generada: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data_description.md

--- Procesando archivo JSON para descripción: db1001_format.json ---
Archivo JSON 'db1001_format.json' cargado correctamente.
Documentación Markdown completa generada: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format_description.md

--- Procesando archivo JSON para descripción: db1001_format_updated.json ---
Archivo JSON 'db1001_format_updated.json' cargado correctamente.
Documentación Markdown completa generada: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format_updated_description.md

--- Proceso de generación de descripciones Markdown completado ---

--- ERRORES (STDERR) ---
Ninguno
--- FIN DEL LOG ---
@@ -0,0 +1,33 @@
--- Log de Ejecución: x6.py ---
Grupo: S7_DB_Utils
Directorio de Trabajo: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
Inicio: 2025-05-17 22:05:32
Fin: 2025-05-17 22:05:33
Duración: 0:00:00.614471
Estado: SUCCESS (Código de Salida: 0)

--- SALIDA ESTÁNDAR (STDOUT) ---
Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
Los archivos Excel de documentación se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
Archivos JSON encontrados para procesar: 3

--- Procesando archivo JSON para Excel: db1001_data.json ---
Archivo JSON 'db1001_data.json' cargado correctamente.
Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_data.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.json.xlsx
Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.json.xlsx

--- Procesando archivo JSON para Excel: db1001_format.json ---
Archivo JSON 'db1001_format.json' cargado correctamente.
Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_format.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.json.xlsx
Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.json.xlsx

--- Procesando archivo JSON para Excel: db1001_format_updated.json ---
Archivo JSON 'db1001_format_updated.json' cargado correctamente.
Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_format_updated.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format_updated.json.xlsx
Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format_updated.json.xlsx

--- Proceso de generación de documentación Excel completado ---

--- ERRORES (STDERR) ---
Ninguno
--- FIN DEL LOG ---
@@ -0,0 +1,29 @@
--- Log de Ejecución: x7_value_updater.py ---
Grupo: S7_DB_Utils
Directorio de Trabajo: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
Inicio: 2025-05-17 23:48:43
Fin: 2025-05-17 23:48:43
Duración: 0:00:00.106052
Estado: SUCCESS (Código de Salida: 0)

--- SALIDA ESTÁNDAR (STDOUT) ---
Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
Found _data file: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\db1001_data.db
Found _format file: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\db1001_format.db
Parsing S7 file: db1001_data.db...
Serializing to JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_data_data.json
JSON saved: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_data_data.json
Parsing S7 file: db1001_format.db...
Serializing to JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format_format.json
JSON saved: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format_format.json
Comparing structure of DB: HMI_Blender_Parameters
La estructura del DB 'HMI_Blender_Parameters' es compatible.

All DB structures are compatible. Proceeding to generate _updated file.
INFO: Usando '_begin_block_assignments_ordered' para generar bloque BEGIN de DB 'HMI_Blender_Parameters'.

Successfully generated _updated S7 file: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\db1001_updated.db

--- ERRORES (STDERR) ---
Ninguno
--- FIN DEL LOG ---
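The log above hinges on locating the `_data` and `_format` source files by naming convention while ignoring `_updated` outputs from previous runs. A self-contained sketch of that classification (hypothetical file list and helper name; the real script scans the working directory with glob):

```python
import os

def classify_s7_files(filenames):
    """Pick the _data and _format S7 source files, skipping _updated outputs."""
    data_file = format_file = None
    for name in filenames:
        base = os.path.splitext(os.path.basename(name))[0]
        if "_updated" in base:
            continue  # outputs of a previous run are never inputs
        if base.endswith("_data") and data_file is None:
            data_file = name
        elif base.endswith("_format") and format_file is None:
            format_file = name
    return data_file, format_file

files = ["db1001_data.db", "db1001_format.db", "db1001_updated.db"]
print(classify_s7_files(files))  # ('db1001_data.db', 'db1001_format.db')
```

The double-extension handling (`.db.txt`, `.awl.txt`) in the real `find_data_format_files` is omitted here for brevity.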
@@ -18,25 +18,31 @@
        "hidden": true
    },
    "x3.py": {
        "display_name": "x3",
        "short_description": "Sin descripción corta.",
        "display_name": "03: Parse DB/AWL",
        "short_description": "Crear archivos json haciendo parsing de los archivos .db o .awl",
        "long_description": "",
        "hidden": false
    },
    "x4.py": {
        "display_name": "x4",
        "short_description": "Sin descripción corta.",
        "long_description": "",
        "display_name": "04: Generar S7 Source y MD",
        "short_description": "Genera código S7 (.txt) y Markdown (.md) desde JSON.",
        "long_description": "Procesa archivos JSON (generados por x3.py) para reconstruir el código fuente S7 en formato .txt y generar documentación detallada en formato Markdown para cada Bloque de Datos (DB) contenido en el JSON.",
        "hidden": false
    },
    "x5.py": {
        "display_name": "x5",
        "short_description": "Sin descripción corta.",
        "long_description": "",
        "display_name": "05: Generar Descripción MD del JSON",
        "short_description": "Genera documentación descriptiva de archivos JSON en Markdown.",
        "long_description": "Crea un archivo Markdown que documenta la estructura interna de los archivos JSON (generados por x3.py). Detalla UDTs y DBs, incluyendo sus miembros, offsets, tipos de datos, y valores iniciales/actuales, facilitando la comprensión del contenido del JSON.",
        "hidden": false
    },
    "x6.py": {
        "display_name": "x6",
        "display_name": "06: Generar Excel desde JSON",
        "short_description": "Genera documentación de DBs en formato Excel (.xlsx) desde JSON.",
        "long_description": "Procesa archivos JSON (generados por x3.py) y exporta la información de cada Bloque de Datos (DB) a un archivo Excel (.xlsx). La hoja de cálculo incluye detalles como direcciones, nombres de variables, tipos de datos, valores iniciales, valores actuales y comentarios.",
        "hidden": false
    },
    "x7_value_updater.py": {
        "display_name": "x7_value_updater",
        "short_description": "Sin descripción corta.",
        "long_description": "",
        "hidden": false
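The entries above follow a small per-script schema (`display_name`, `short_description`, `long_description`, `hidden`). A sketch of how a launcher might filter and label scripts from this metadata (inline sample data and a hypothetical helper name, not part of the commit):

```python
def visible_scripts(metadata):
    """Return (filename, display_name) pairs for entries not marked hidden."""
    return [
        (fname, props.get("display_name", fname))
        for fname, props in sorted(metadata.items())
        if not props.get("hidden", False)
    ]

sample = {
    "x4.py": {"display_name": "04: Generar S7 Source y MD", "hidden": False},
    "x0.py": {"display_name": "hidden helper", "hidden": True},
}
print(visible_scripts(sample))  # [('x4.py', '04: Generar S7 Source y MD')]
```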
@@ -1,6 +1,9 @@
# --- x4.py (Modificaciones v_final_2) ---
import json
from typing import List, Dict, Any
import sys
import os
import glob  # Para buscar archivos JSON

script_root = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
@@ -122,7 +125,7 @@ def generate_s7_source_code_lines(data: Dict[str, Any]) -> List[str]:
# generate_markdown_table (sin cambios respecto a la v5)
def generate_markdown_table(db_info: Dict[str, Any]) -> List[str]:
    lines = []
    lines.append(f"# Documentación para DB: {db_info['name']}")
    lines.append(f"## Documentación para DB: {db_info['name']}")  # Cambiado a H2 para múltiples DBs por archivo
    lines.append("")
    lines.append("| Address | Name | Type | Initial Value | Actual Value | Comment |")
    lines.append("|---|---|---|---|---|---|")
@@ -158,34 +161,75 @@ def main():
    working_dir = find_working_directory()
    print(f"Using working directory: {working_dir}")

    json_input_filename = "parsed_s7_data_stat.json"  # Espera el JSON de x3_v_final_2
    s7_output_filename = "reconstructed_s7_source.txt"

    try:
        with open(json_input_filename, 'r', encoding='utf-8') as f: data_from_json = json.load(f)
        print(f"Archivo JSON '{json_input_filename}' cargado correctamente.")
    except Exception as e:
        print(f"Error al cargar/leer {json_input_filename}: {e}"); return
    input_json_dir = os.path.join(working_dir, "json")
    documentation_dir = os.path.join(working_dir, "documentation")
    os.makedirs(documentation_dir, exist_ok=True)
    print(f"Los archivos de documentación generados se guardarán en: {documentation_dir}")

    s7_code_lines = generate_s7_source_code_lines(data_from_json)
    try:
        with open(s7_output_filename, 'w', encoding='utf-8') as f:
            for line in s7_code_lines: f.write(line + "\n")
        print(f"Archivo S7 reconstruido generado: {s7_output_filename}")
    except Exception as e: print(f"Error al escribir el archivo S7 {s7_output_filename}: {e}")
    json_files_to_process = glob.glob(os.path.join(input_json_dir, "*.json"))

    if data_from_json.get("dbs"):
        for db_to_document in data_from_json["dbs"]:
            db_name_safe = db_to_document['name'].replace('"', '').replace(' ', '_').replace('/','_')
            md_filename_specific = f"documentation_db_{db_name_safe}.md"
            print(f"\nGenerando documentación Markdown para DB: {db_to_document['name']}...")
            markdown_lines = generate_markdown_table(db_to_document)
    if not json_files_to_process:
        print(f"No se encontraron archivos .json en {input_json_dir}")
        return

    print(f"Archivos JSON encontrados para procesar: {len(json_files_to_process)}")

    for json_input_filepath in json_files_to_process:
        json_filename_base = os.path.splitext(os.path.basename(json_input_filepath))[0]
        current_json_filename = os.path.basename(json_input_filepath)
        print(f"\n--- Procesando archivo JSON: {current_json_filename} ---")

        s7_output_filename = os.path.join(documentation_dir, f"{json_filename_base}.txt")
        md_output_filename = os.path.join(documentation_dir, f"{json_filename_base}.md")

        try:
            with open(json_input_filepath, 'r', encoding='utf-8') as f:
                data_from_json = json.load(f)
            print(f"Archivo JSON '{current_json_filename}' cargado correctamente.")
        except Exception as e:
            print(f"Error al cargar/leer {current_json_filename}: {e}")
            continue  # Saltar al siguiente archivo JSON

        # Generar archivo S7 (.txt)
        s7_code_lines = generate_s7_source_code_lines(data_from_json)
        try:
            with open(s7_output_filename, 'w', encoding='utf-8') as f:
                for line in s7_code_lines:
                    f.write(line + "\n")
            print(f"Archivo S7 reconstruido generado: {s7_output_filename}")
        except Exception as e:
            print(f"Error al escribir el archivo S7 {s7_output_filename}: {e}")

        # Generar archivo Markdown (.md) para todos los DBs en este JSON
        all_db_markdown_lines = []
        if data_from_json.get("dbs"):
            all_db_markdown_lines.append(f"# Documentación S7 para {json_filename_base}")
            all_db_markdown_lines.append(f"_Fuente JSON: {current_json_filename}_")
            all_db_markdown_lines.append("")

            for db_index, db_to_document in enumerate(data_from_json["dbs"]):
                if db_index > 0:
                    all_db_markdown_lines.append("\n---\n")  # Separador visual entre DBs

                markdown_lines_for_one_db = generate_markdown_table(db_to_document)
                all_db_markdown_lines.extend(markdown_lines_for_one_db)
                all_db_markdown_lines.append("")  # Espacio después de cada tabla de DB

            try:
                with open(md_filename_specific, 'w', encoding='utf-8') as f:
                    for line in markdown_lines: f.write(line + "\n")
                print(f"Archivo Markdown de documentación generado: {md_filename_specific}")
            except Exception as e: print(f"Error al escribir {md_filename_specific}: {e}")
        else: print("No se encontraron DBs en el archivo JSON para generar documentación.")
                with open(md_output_filename, 'w', encoding='utf-8') as f:
                    for line in all_db_markdown_lines:
                        f.write(line + "\n")
                print(f"Archivo Markdown de documentación generado: {md_output_filename}")
            except Exception as e:
                print(f"Error al escribir el archivo Markdown {md_output_filename}: {e}")
        else:
            print(f"No se encontraron DBs en {current_json_filename} para generar documentación Markdown.")
            # Opcionalmente, crear un archivo MD con un mensaje
            with open(md_output_filename, 'w', encoding='utf-8') as f:
                f.write(f"# Documentación S7 para {json_filename_base}\n\n_Fuente JSON: {current_json_filename}_\n\nNo se encontraron Bloques de Datos (DBs) en este archivo JSON.\n")
            print(f"Archivo Markdown generado (sin DBs): {md_output_filename}")

    print("\n--- Proceso de generación de documentación completado ---")

if __name__ == "__main__":
    main()
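The core of the rewritten `main()` is the switch from one hard-coded input file to a glob loop that reports a bad file and continues with the rest. The control flow can be reduced to this self-contained sketch (in-memory stand-ins for the file reads; helper name is illustrative):

```python
import json

def process_all(files):
    """Parse each (name, payload) pair; on error, report and continue."""
    results = []
    for name, payload in files:
        try:
            results.append((name, json.loads(payload)))
        except Exception as e:
            print(f"Error al cargar/leer {name}: {e}")
            continue  # one bad file must not abort the batch
    return results

ok = process_all([("a.json", '{"dbs": []}'), ("b.json", "not json"), ("c.json", "{}")])
print([name for name, _ in ok])  # ['a.json', 'c.json']
```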
@@ -1,5 +1,9 @@
import json
from typing import List, Dict, Any, Optional
import sys
import os
import glob  # Para buscar archivos JSON
from datetime import datetime  # Mover import al inicio

script_root = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
@@ -184,26 +188,44 @@ def generate_json_documentation(data: Dict[str, Any], output_filename: str):
if __name__ == "__main__":
    working_dir = find_working_directory()
    print(f"Using working directory: {working_dir}")

    from datetime import datetime  # Mover import aquí

    # Asume que el JSON es generado por la última versión de x3.py
    # (ej. parsed_s7_data_x3_v_final_3.json o el nombre que uses)
    json_input_filename = "parsed_s7_data.json"
    markdown_output_filename = "documentacion_json_s7.md"

    try:
        with open(json_input_filename, 'r', encoding='utf-8') as f:
            data_from_json = json.load(f)
        print(f"Archivo JSON '{json_input_filename}' cargado correctamente.")
    except FileNotFoundError:
        print(f"Error: No se encontró el archivo JSON de entrada: {json_input_filename}")
        exit()
    except json.JSONDecodeError:
        print(f"Error: El archivo JSON de entrada no es válido: {json_input_filename}")
        exit()
    except Exception as e:
        print(f"Error al leer el archivo JSON {json_input_filename}: {e}")
        exit()
    input_json_dir = os.path.join(working_dir, "json")
    documentation_dir = os.path.join(working_dir, "documentation")
    os.makedirs(documentation_dir, exist_ok=True)
    print(f"Los archivos Markdown de descripción se guardarán en: {documentation_dir}")

    generate_json_documentation(data_from_json, markdown_output_filename)
    json_files_to_process = glob.glob(os.path.join(input_json_dir, "*.json"))

    if not json_files_to_process:
        print(f"No se encontraron archivos .json en {input_json_dir}")
    else:

        print(f"Archivos JSON encontrados para procesar: {len(json_files_to_process)}")

        for json_input_filepath in json_files_to_process:
            json_filename_base = os.path.splitext(os.path.basename(json_input_filepath))[0]
            current_json_filename = os.path.basename(json_input_filepath)
            print(f"\n--- Procesando archivo JSON para descripción: {current_json_filename} ---")

            markdown_output_filename = os.path.join(documentation_dir, f"{json_filename_base}_description.md")

            try:
                with open(json_input_filepath, 'r', encoding='utf-8') as f:
                    data_from_json = json.load(f)
                print(f"Archivo JSON '{current_json_filename}' cargado correctamente.")
            except FileNotFoundError:
                print(f"Error: No se encontró el archivo JSON de entrada: {json_input_filepath}")
                continue
            except json.JSONDecodeError:
                print(f"Error: El archivo JSON de entrada no es válido: {json_input_filepath}")
                continue
            except Exception as e:
                print(f"Error al leer el archivo JSON {json_input_filepath}: {e}")
                continue

            try:
                generate_json_documentation(data_from_json, markdown_output_filename)
            except Exception as e:
                print(f"Error al generar la documentación para {current_json_filename}: {e}")

        print("\n--- Proceso de generación de descripciones Markdown completado ---")
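The `_description.md` names that appear in the x5 log follow directly from the input paths. A minimal sketch of that derivation (hypothetical helper name; the real code inlines the same `os.path` calls):

```python
import os

def description_md_path(json_input_filepath, documentation_dir):
    """Derive <base>_description.md inside documentation_dir from a JSON path."""
    base = os.path.splitext(os.path.basename(json_input_filepath))[0]
    return os.path.join(documentation_dir, f"{base}_description.md")

# e.g. json/db1001_data.json -> documentation/db1001_data_description.md
print(description_md_path(os.path.join("json", "db1001_data.json"), "documentation"))
```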
@@ -3,7 +3,9 @@ import json
from typing import List, Dict, Any
import openpyxl  # For Excel export
from openpyxl.utils import get_column_letter

import sys
import os
import glob  # Para buscar archivos JSON

script_root = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
@@ -114,31 +116,51 @@ def main():
    working_dir = find_working_directory()
    print(f"Using working directory: {working_dir}")

    json_input_filename = "parsed_s7_data.json"  # Expected JSON input

    try:
        with open(json_input_filename, 'r', encoding='utf-8') as f:
            data_from_json = json.load(f)
        print(f"Archivo JSON '{json_input_filename}' cargado correctamente.")
    except FileNotFoundError:
        print(f"Error: El archivo JSON de entrada '{json_input_filename}' no fue encontrado.")
        return
    except json.JSONDecodeError:
        print(f"Error: El archivo JSON '{json_input_filename}' no tiene un formato JSON válido.")
        return
    except Exception as e:
        print(f"Error al cargar/leer {json_input_filename}: {e}")
    input_json_dir = os.path.join(working_dir, "json")
    documentation_dir = os.path.join(working_dir, "documentation")
    os.makedirs(documentation_dir, exist_ok=True)
    print(f"Los archivos Excel de documentación se guardarán en: {documentation_dir}")

    json_files_to_process = glob.glob(os.path.join(input_json_dir, "*.json"))

    if not json_files_to_process:
        print(f"No se encontraron archivos .json en {input_json_dir}")
        return

    if data_from_json.get("dbs"):
        for db_to_document in data_from_json["dbs"]:
            db_name_safe = db_to_document['name'].replace('"', '').replace(' ', '_').replace('/','_')
            excel_output_filename = f"documentation_db_{db_name_safe}.xlsx"

            print(f"\nGenerando documentación Excel para DB: {db_to_document['name']}...")
            generate_excel_table(db_to_document, excel_output_filename)
    else:
        print("No se encontraron DBs en el archivo JSON para generar documentación.")
    print(f"Archivos JSON encontrados para procesar: {len(json_files_to_process)}")

    for json_input_filepath in json_files_to_process:
        current_json_filename = os.path.basename(json_input_filepath)
        print(f"\n--- Procesando archivo JSON para Excel: {current_json_filename} ---")

        try:
            with open(json_input_filepath, 'r', encoding='utf-8') as f:
                data_from_json = json.load(f)
            print(f"Archivo JSON '{current_json_filename}' cargado correctamente.")
        except FileNotFoundError:
            print(f"Error: El archivo JSON de entrada '{current_json_filename}' no fue encontrado en {json_input_filepath}.")
            continue
        except json.JSONDecodeError:
            print(f"Error: El archivo JSON '{current_json_filename}' no tiene un formato JSON válido.")
            continue
        except Exception as e:
            print(f"Error al cargar/leer {current_json_filename}: {e}")
            continue

        if data_from_json.get("dbs"):
            for db_to_document in data_from_json["dbs"]:
                # Construir el path completo para el archivo Excel de salida
                excel_output_filename = os.path.join(documentation_dir, f"{current_json_filename}.xlsx")

                print(f"Generando documentación Excel para DB: '{db_to_document['name']}' (desde {current_json_filename}) -> {excel_output_filename}")
                try:
                    generate_excel_table(db_to_document, excel_output_filename)
                except Exception as e_excel:
                    print(f"Error al generar Excel para DB '{db_to_document['name']}': {e_excel}")
        else:
            print(f"No se encontraron DBs en el archivo JSON '{current_json_filename}' para generar documentación Excel.")

    print("\n--- Proceso de generación de documentación Excel completado ---")

if __name__ == "__main__":
    main()
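The body of `generate_excel_table` is not part of this hunk; what it needs from each DB is a flat row layout matching the columns named in the x6 metadata (Address, Name, Type, Initial Value, Actual Value, Comment). That layout can be sketched without openpyxl (the member field names here are assumptions, not the parser's confirmed schema):

```python
HEADERS = ["Address", "Name", "Type", "Initial Value", "Actual Value", "Comment"]

def db_rows(db):
    """Flatten a parsed DB dict into spreadsheet rows (hypothetical field names)."""
    rows = [HEADERS]
    for var in db.get("members", []):
        rows.append([
            var.get("byte_offset", ""),
            var.get("name", ""),
            var.get("data_type", ""),
            var.get("initial_value", ""),
            var.get("current_value", ""),
            var.get("comment", ""),
        ])
    return rows

db = {"name": "HMI_Blender_Parameters",
      "members": [{"name": "Processor_Options", "data_type": "Bool",
                   "byte_offset": 0.0, "initial_value": "FALSE"}]}
print(len(db_rows(db)))  # 2
```

An Excel writer would then append these rows to an openpyxl worksheet one by one.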
@@ -0,0 +1,413 @@
import json
import os
import sys
import glob
import copy
import re
from typing import List, Dict, Optional, Tuple, Any

# Assuming x3.py and x4.py are in the same directory or accessible via PYTHONPATH
# Imports from x3.py
from x3 import (
    S7Parser,
    ParsedData,  # Dataclass for top-level structure
    # The following dataclasses are defined in x3.py and used by S7Parser.
    # We might not need to import them explicitly if we work with dicts from JSON.
    # VariableInfo, ArrayDimension, UdtInfo, DbInfo,
    custom_json_serializer,
    find_working_directory,
)

# Imports from x4.py (or reimplementations if direct import is problematic)
# These functions from x4.py work on dictionary representations of the parsed data.
from x4 import (
    format_data_type_for_source,
    generate_variable_declaration_for_source,
    generate_struct_members_for_source,
    generate_begin_block_assignments,
    generate_s7_source_code_lines,
)
# --- Helper Functions ---


def find_data_format_files(working_dir: str) -> Tuple[Optional[str], Optional[str]]:
    """Finds _data and _format files in the working directory."""
    data_file: Optional[str] = None
    format_file: Optional[str] = None

    extensions = ["*.db", "*.awl", "*.db.txt", "*.awl.txt"]
    all_s7_files = []
    for ext_pattern in extensions:
        all_s7_files.extend(glob.glob(os.path.join(working_dir, ext_pattern)))

    # Prioritize longer extensions first for matching to avoid partial matches like .db when .db.txt exists
    all_s7_files.sort(key=len, reverse=True)

    for f_path in all_s7_files:
        basename = os.path.basename(f_path)
        # Check for _data file (and ensure it's not an _updated file from a previous run)
        if "_data" in basename and "_updated" not in basename:
            # More specific check to avoid matching e.g. "some_other_data_related_file"
            # We expect "PREFIX_data.EXT"
            name_part, _ = os.path.splitext(
                basename
            )  # For "file.db.txt", this gives "file.db"
            if name_part.endswith("_data") or basename.replace(
                os.path.splitext(basename)[-1], ""
            ).endswith(
                "_data"
            ):  # Handles single and double extensions
                if data_file is None:  # Take the first one found (after sorting)
                    data_file = f_path

        # Check for _format file
        if "_format" in basename and "_updated" not in basename:
            name_part, _ = os.path.splitext(basename)
            if name_part.endswith("_format") or basename.replace(
                os.path.splitext(basename)[-1], ""
            ).endswith("_format"):
                if format_file is None:
                    format_file = f_path

    if data_file:
        print(f"Found _data file: {data_file}")
    else:
        print("Warning: No _data file found.")

    if format_file:
        print(f"Found _format file: {format_file}")
    else:
        print("Warning: No _format file found.")

    return data_file, format_file
def parse_s7_to_json_file(s7_filepath: str, json_dir: str) -> Optional[str]:
    """Parses an S7 source file to JSON and saves it."""
    parser = S7Parser()
    filename = os.path.basename(s7_filepath)
    print(f"Parsing S7 file: {filename}...")

    try:
        parsed_result = parser.parse_file(s7_filepath)
    except Exception as e:
        print(f"Error parsing {filename}: {e}")
        return None

    output_filename_base = os.path.splitext(filename)[0]
    # Handle double extensions like .db.txt
    if ".db" in output_filename_base or ".awl" in output_filename_base:
        # A more robust way to get the true base name before multiple extensions
        # Example: "file.db.txt" -> "file"
        # Example: "file.db" -> "file"
        temp_name = filename
        known_exts = [
            ".txt",
            ".db",
            ".awl",
        ]  # order might matter if extensions can be part of name
        for k_ext in reversed(known_exts):  # try removing from right to left
            if temp_name.lower().endswith(k_ext):
                temp_name = temp_name[: -len(k_ext)]
        output_filename_base = temp_name  # This is the "true" base

    json_output_filename = os.path.join(
        json_dir,
        f"{output_filename_base}_{os.path.basename(s7_filepath).split('_', 1)[1].split('.')[0]}.json",
    )  # e.g. base_data.json

    print(f"Serializing to JSON: {json_output_filename}")
    try:
        json_output = json.dumps(
            parsed_result, default=custom_json_serializer, indent=2
        )
        with open(json_output_filename, "w", encoding="utf-8") as f:
            f.write(json_output)
        print(f"JSON saved: {json_output_filename}")
        return json_output_filename
    except Exception as e:
        print(f"Error during JSON serialization or writing for {filename}: {e}")
        return None
def load_json_file(json_filepath: str) -> Optional[Dict[str, Any]]:
    """Loads a JSON file into a Python dictionary."""
    try:
        with open(json_filepath, "r", encoding="utf-8") as f:
            data = json.load(f)
        return data
    except Exception as e:
        print(f"Error loading JSON file {json_filepath}: {e}")
        return None
def flatten_db_variables_for_compare(
    members_list: List[Dict[str, Any]], parent_path: str = ""
) -> List[Tuple[str, Dict[str, Any]]]:
    """
    Flattens DB members for comparison.
    Collects all 'leaf' nodes (primitives, arrays of primitives, strings)
    and their full paths.
    """
    flat_list = []
    for var_info in members_list:
        var_name_segment = var_info["name"]
        current_var_path = (
            f"{parent_path}{var_name_segment}" if parent_path else var_name_segment
        )

        if var_info.get("children"):
            flat_list.extend(
                flatten_db_variables_for_compare(
                    var_info["children"], f"{current_var_path}."
                )
            )
        else:
            flat_list.append((current_var_path, var_info))
    return flat_list
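A worked example of the flattening (the function is restated so the snippet runs standalone; the member dicts are hypothetical):

```python
def flatten_db_variables_for_compare(members_list, parent_path=""):
    """Collect (dotted_path, leaf_dict) pairs, recursing into children."""
    flat_list = []
    for var_info in members_list:
        current_var_path = (
            f"{parent_path}{var_info['name']}" if parent_path else var_info["name"]
        )
        if var_info.get("children"):
            flat_list.extend(
                flatten_db_variables_for_compare(
                    var_info["children"], f"{current_var_path}."
                )
            )
        else:
            flat_list.append((current_var_path, var_info))
    return flat_list

members = [
    {"name": "Settings", "children": [
        {"name": "Mode"},
        {"name": "Limits", "children": [{"name": "Max"}]},
    ]},
    {"name": "Enable"},
]
paths = [p for p, _ in flatten_db_variables_for_compare(members)]
print(paths)  # ['Settings.Mode', 'Settings.Limits.Max', 'Enable']
```

Only leaves end up in the list, which is what lets `compare_db_structures` walk the two DBs position by position.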


def compare_db_structures(data_db: Dict[str, Any], format_db: Dict[str, Any]) -> bool:
    """
    Compares the structure of two DBs (as dicts from JSON).
    Returns True if compatible, False otherwise.
    """
    db_name = format_db.get("name", "UnknownDB")
    print(f"Comparing structure of DB: {db_name}")

    flat_data_vars_with_paths = flatten_db_variables_for_compare(
        data_db.get("members", [])
    )
    flat_format_vars_with_paths = flatten_db_variables_for_compare(
        format_db.get("members", [])
    )

    if len(flat_data_vars_with_paths) != len(flat_format_vars_with_paths):
        print(f"Error: DB '{db_name}' tiene un número diferente de variables expandidas (hoja).")
        print(f"  Número de variables en archivo _data: {len(flat_data_vars_with_paths)}")
        print(f"  Número de variables en archivo _format: {len(flat_format_vars_with_paths)}")

        min_len = min(len(flat_data_vars_with_paths), len(flat_format_vars_with_paths))
        divergence_found_early = False
        # Check whether a data type or name differs before the end of the shorter list
        for k in range(min_len):
            path_data_k, var_data_k = flat_data_vars_with_paths[k]
            path_format_k, var_format_k = flat_format_vars_with_paths[k]

            type_str_data_k = format_data_type_for_source(var_data_k)
            type_str_format_k = format_data_type_for_source(var_format_k)

            # Compare types. Names may differ if the internal structure of a UDT/Struct changed.
            # The _data path is approximate if struct/UDT member names changed.
            if type_str_data_k != type_str_format_k:
                print(f"  Adicionalmente, se encontró una discrepancia de tipo ANTES del final de la lista más corta (índice {k}):")
                print(f"    _format variable: Path='{path_format_k}', Nombre='{var_format_k['name']}', Tipo='{type_str_format_k}'")
                print(f"    _data variable:   Path='{path_data_k}' (aprox.), Nombre='{var_data_k['name']}', Tipo='{type_str_data_k}'")
                divergence_found_early = True
                break

        if not divergence_found_early:
            # If there were no early mismatches, the difference is extra variables at the end.
            if len(flat_data_vars_with_paths) > len(flat_format_vars_with_paths):
                print(f"  El archivo _data tiene {len(flat_data_vars_with_paths) - min_len} variable(s) más.")
                print(f"  Primeras variables extra en _data (path, nombre, tipo) desde el índice {min_len}:")
                for j in range(min_len, min(min_len + 5, len(flat_data_vars_with_paths))):  # Show up to 5 extras
                    path, var = flat_data_vars_with_paths[j]
                    print(f"    - Path: '{path}', Nombre: '{var['name']}', Tipo: '{format_data_type_for_source(var)}'")
            else:
                print(f"  El archivo _format tiene {len(flat_format_vars_with_paths) - min_len} variable(s) más.")
                print(f"  Primeras variables extra en _format (path, nombre, tipo) desde el índice {min_len}:")
                for j in range(min_len, min(min_len + 5, len(flat_format_vars_with_paths))):  # Show up to 5 extras
                    path, var = flat_format_vars_with_paths[j]
                    print(f"    - Path: '{path}', Nombre: '{var['name']}', Tipo: '{format_data_type_for_source(var)}'")
        return False

    for i in range(len(flat_format_vars_with_paths)):
        path_data, var_data = flat_data_vars_with_paths[i]
        path_format, var_format = flat_format_vars_with_paths[i]

        type_str_data = format_data_type_for_source(var_data)
        type_str_format = format_data_type_for_source(var_format)

        if type_str_data != type_str_format:
            print(f"Error: Discrepancia de tipo en DB '{db_name}' para la variable en el índice {i} (contando desde 0) de la lista expandida.")
            print(f"  Comparando:")
            print(f"    _format variable: Path='{path_format}', Nombre='{var_format['name']}', Tipo Declarado='{type_str_format}'")
            print(f"      Offset: {var_format.get('byte_offset')}, Tamaño: {var_format.get('size_in_bytes')} bytes")
            print(f"    _data variable:   Path='{path_data}' (aprox.), Nombre='{var_data['name']}', Tipo Declarado='{type_str_data}'")
            print(f"      Offset: {var_data.get('byte_offset')}, Tamaño: {var_data.get('size_in_bytes')} bytes")
            return False

    print(f"La estructura del DB '{db_name}' es compatible.")
    return True


def update_format_db_members_recursive(
    format_members: List[Dict[str, Any]], data_members: List[Dict[str, Any]]
):
    """
    Recursively updates 'initial_value', 'current_value', and 'current_element_values'
    in format_members using values from data_members.
    Assumes structures are compatible and lists have the same length.
    """
    for i in range(len(format_members)):
        fm_var = format_members[i]
        dm_var = data_members[i]

        fm_var["initial_value"] = dm_var.get("initial_value")
        fm_var["current_value"] = dm_var.get("current_value")
        if "current_element_values" in dm_var:
            fm_var["current_element_values"] = dm_var["current_element_values"]
        elif "current_element_values" in fm_var:
            del fm_var["current_element_values"]

        if fm_var.get("children") and dm_var.get("children"):
            if len(fm_var["children"]) == len(dm_var["children"]):
                update_format_db_members_recursive(
                    fm_var["children"], dm_var["children"]
                )
            else:
                print(
                    f"Warning: Mismatch in children count for {fm_var['name']} during update. This is unexpected."
                )


def get_updated_filename(format_filename_basename: str) -> str:
    """Generates the output filename for the _updated file."""
    suffixes_map = {
        "_format.db.txt": "_updated.db.txt",
        "_format.awl.txt": "_updated.awl.txt",
        "_format.db": "_updated.db",
        "_format.awl": "_updated.awl",
    }

    for s_format, s_updated in suffixes_map.items():
        if format_filename_basename.lower().endswith(s_format.lower()):
            base = format_filename_basename[: -len(s_format)]
            return base + s_updated

    if "_format" in format_filename_basename:
        return format_filename_basename.replace("_format", "_updated")

    name, ext = os.path.splitext(format_filename_basename)
    return f"{name}_updated{ext}"
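A minimal, self-contained sketch of the suffix mapping implemented above (the function name `updated_name` is mine; the suffix table and fallbacks mirror `get_updated_filename`):

```python
import os

def updated_name(basename: str) -> str:
    # Longest, most specific suffixes first, so "_format.db.txt" wins over "_format.db".
    suffixes = {
        "_format.db.txt": "_updated.db.txt",
        "_format.awl.txt": "_updated.awl.txt",
        "_format.db": "_updated.db",
        "_format.awl": "_updated.awl",
    }
    for s_fmt, s_upd in suffixes.items():
        if basename.lower().endswith(s_fmt):
            # Slice the original (case-preserving) name, then append the new suffix.
            return basename[: -len(s_fmt)] + s_upd
    # Fallback 1: any embedded "_format" token is renamed.
    if "_format" in basename:
        return basename.replace("_format", "_updated")
    # Fallback 2: append "_updated" before the extension.
    name, ext = os.path.splitext(basename)
    return f"{name}_updated{ext}"

print(updated_name("db1001_format.db"))  # -> db1001_updated.db
```

The dict-driven approach keeps the double-extension cases (`.db.txt`, `.awl.txt`) from being mangled by a plain `os.path.splitext`, which would only strip the final `.txt`.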


# --- Main Script Logic ---
def main():
    working_dir = find_working_directory()
    print(f"Using working directory: {working_dir}")

    data_s7_filepath, format_s7_filepath = find_data_format_files(working_dir)

    if not data_s7_filepath or not format_s7_filepath:
        print(
            "Error: Both _data and _format S7 source files must be present. Aborting."
        )
        return

    json_dir = os.path.join(working_dir, "json")
    os.makedirs(json_dir, exist_ok=True)

    data_json_filepath = parse_s7_to_json_file(data_s7_filepath, json_dir)
    if not data_json_filepath:
        print("Failed to parse _data file. Aborting.")
        return
    data_parsed_dict = load_json_file(data_json_filepath)
    if not data_parsed_dict:
        print("Failed to load _data JSON. Aborting.")
        return

    format_json_filepath = parse_s7_to_json_file(format_s7_filepath, json_dir)
    if not format_json_filepath:
        print("Failed to parse _format file. Aborting.")
        return
    format_parsed_dict = load_json_file(format_json_filepath)
    if not format_parsed_dict:
        print("Failed to load _format JSON. Aborting.")
        return

    data_dbs = data_parsed_dict.get("dbs", [])
    format_dbs = format_parsed_dict.get("dbs", [])

    if not format_dbs:
        print("No Data Blocks found in the _format file. Nothing to update. Aborting.")
        return

    if len(data_dbs) != len(format_dbs):
        print(
            f"Error: Mismatch in the number of Data Blocks. "
            f"_data file has {len(data_dbs)} DBs, _format file has {len(format_dbs)} DBs. Aborting."
        )
        return

    all_dbs_compatible = True
    for i in range(len(format_dbs)):
        current_format_db = format_dbs[i]
        current_data_db = next(
            (db for db in data_dbs if db["name"] == current_format_db["name"]), None
        )

        if not current_data_db:
            print(
                f"Error: DB '{current_format_db['name']}' from _format file not found in _data file. Aborting."
            )
            all_dbs_compatible = False
            break

        if not compare_db_structures(current_data_db, current_format_db):
            all_dbs_compatible = False
            break

    if not all_dbs_compatible:
        print("Comparison failed. Aborting generation of _updated file.")
        return

    print("\nAll DB structures are compatible. Proceeding to generate _updated file.")

    updated_parsed_dict = copy.deepcopy(format_parsed_dict)
    updated_parsed_dict["udts"] = format_parsed_dict.get("udts", [])
    updated_dbs_list = updated_parsed_dict.get("dbs", [])

    for i in range(len(updated_dbs_list)):
        updated_db_ref = updated_dbs_list[i]
        data_db_original = next(
            (db for db in data_dbs if db["name"] == updated_db_ref["name"]), None
        )
        if not data_db_original:
            print(
                f"Critical Error: Could not find data DB {updated_db_ref['name']} during update phase. Aborting."
            )
            return

        if "members" in updated_db_ref and "members" in data_db_original:
            update_format_db_members_recursive(
                updated_db_ref["members"], data_db_original["members"]
            )

        updated_db_ref["_begin_block_assignments_ordered"] = data_db_original.get(
            "_begin_block_assignments_ordered", []
        )
        updated_db_ref["_initial_values_from_begin_block"] = data_db_original.get(
            "_initial_values_from_begin_block", {}
        )

    s7_output_lines = generate_s7_source_code_lines(updated_parsed_dict)

    output_s7_filename_basename = get_updated_filename(
        os.path.basename(format_s7_filepath)
    )
    output_s7_filepath = os.path.join(working_dir, output_s7_filename_basename)

    try:
        with open(output_s7_filepath, "w", encoding="utf-8") as f:
            for line in s7_output_lines:
                f.write(line + "\n")
        print(f"\nSuccessfully generated _updated S7 file: {output_s7_filepath}")
    except Exception as e:
        print(f"Error writing _updated S7 file {output_s7_filepath}: {e}")


if __name__ == "__main__":
    main()
data/log.txt
@@ -1,32 +1,17 @@
-[21:30:30] Iniciando ejecución de x3.py en C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001...
-[21:30:30] Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
-[21:30:30] Los archivos JSON de salida se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json
-[21:30:30] Archivos encontrados para procesar: 3
-[21:30:30] --- Procesando archivo: db1001_data.db ---
-[21:30:30] Parseo completo. Intentando serializar a JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_data.json
-[21:30:30] Resultado guardado en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_data.json
-[21:30:30] --- Procesando archivo: db1001_format.db ---
-[21:30:30] Parseo completo. Intentando serializar a JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format.json
-[21:30:30] Resultado guardado en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format.json
-[21:30:30] --- Procesando archivo: db1001_format_updated.db ---
-[21:30:30] Parseo completo. Intentando serializar a JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format_updated.json
-[21:30:30] Resultado guardado en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format_updated.json
-[21:30:30] --- Proceso completado ---
-[21:30:30] Ejecución de x3.py finalizada (success). Duración: 0:00:00.145553.
-[21:30:30] Log completo guardado en: D:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\S7_DB_Utils\log_x3.txt
-[21:31:24] Iniciando ejecución de x3.py en C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001...
-[21:31:24] Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
-[21:31:24] Los archivos JSON de salida se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json
-[21:31:24] Archivos encontrados para procesar: 3
-[21:31:24] --- Procesando archivo: db1001_data.db ---
-[21:31:24] Parseo completo. Intentando serializar a JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_data.json
-[21:31:25] Resultado guardado en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_data.json
-[21:31:25] --- Procesando archivo: db1001_format.db ---
-[21:31:25] Parseo completo. Intentando serializar a JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format.json
-[21:31:25] Resultado guardado en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format.json
-[21:31:25] --- Procesando archivo: db1001_format_updated.db ---
-[21:31:25] Parseo completo. Intentando serializar a JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format_updated.json
-[21:31:25] Resultado guardado en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format_updated.json
-[21:31:25] --- Proceso completado ---
-[21:31:25] Ejecución de x3.py finalizada (success). Duración: 0:00:00.136451.
-[21:31:25] Log completo guardado en: D:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\S7_DB_Utils\log_x3.txt
+[23:48:43] Iniciando ejecución de x7_value_updater.py en C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001...
+[23:48:43] Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
+[23:48:43] Found _data file: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\db1001_data.db
+[23:48:43] Found _format file: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\db1001_format.db
+[23:48:43] Parsing S7 file: db1001_data.db...
+[23:48:43] Serializing to JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_data_data.json
+[23:48:43] JSON saved: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_data_data.json
+[23:48:43] Parsing S7 file: db1001_format.db...
+[23:48:43] Serializing to JSON: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format_format.json
+[23:48:43] JSON saved: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_format_format.json
+[23:48:43] Comparing structure of DB: HMI_Blender_Parameters
+[23:48:43] La estructura del DB 'HMI_Blender_Parameters' es compatible.
+[23:48:43] All DB structures are compatible. Proceeding to generate _updated file.
+[23:48:43] INFO: Usando '_begin_block_assignments_ordered' para generar bloque BEGIN de DB 'HMI_Blender_Parameters'.
+[23:48:43] Successfully generated _updated S7 file: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\db1001_updated.db
+[23:48:43] Ejecución de x7_value_updater.py finalizada (success). Duración: 0:00:00.106052.
+[23:48:43] Log completo guardado en: D:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\S7_DB_Utils\log_x7_value_updater.txt