Trying to get the x7 comparison Excel file to work correctly
This commit is contained in:
parent 0f162377cd
commit 89451abd15
@@ -1,15 +1,15 @@
 --- Log de Ejecución: x4.py ---
 Grupo: S7_DB_Utils
 Directorio de Trabajo: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
-Inicio: 2025-05-18 02:13:16
-Fin: 2025-05-18 02:13:16
-Duración: 0:00:00.162328
+Inicio: 2025-05-18 13:15:28
+Fin: 2025-05-18 13:15:28
+Duración: 0:00:00.188819
 Estado: SUCCESS (Código de Salida: 0)

 --- SALIDA ESTÁNDAR (STDOUT) ---
 Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
 Los archivos de documentación generados se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
-Archivos JSON encontrados para procesar: 2
+Archivos JSON encontrados para procesar: 3

 --- Procesando archivo JSON: db1001_data.json ---
 Archivo JSON 'db1001_data.json' cargado correctamente.
@@ -21,6 +21,11 @@ Archivo JSON 'db1001_format.json' cargado correctamente.
 Archivo S7 reconstruido generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.txt
 Archivo Markdown de documentación generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.md

+--- Procesando archivo JSON: db1001_updated.json ---
+Archivo JSON 'db1001_updated.json' cargado correctamente.
+Archivo S7 reconstruido generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.txt
+Archivo Markdown de documentación generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.md
+
 --- Proceso de generación de documentación completado ---

 --- ERRORES (STDERR) ---

@@ -1,25 +1,30 @@
 --- Log de Ejecución: x6.py ---
 Grupo: S7_DB_Utils
 Directorio de Trabajo: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
-Inicio: 2025-05-18 02:20:21
-Fin: 2025-05-18 02:20:22
-Duración: 0:00:01.130771
+Inicio: 2025-05-18 12:06:45
+Fin: 2025-05-18 12:06:46
+Duración: 0:00:00.564906
 Estado: SUCCESS (Código de Salida: 0)

 --- SALIDA ESTÁNDAR (STDOUT) ---
 Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
 Los archivos Excel de documentación se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
-Archivos JSON encontrados para procesar: 2
+Archivos JSON encontrados para procesar: 3

 --- Procesando archivo JSON para Excel: db1001_data.json ---
 Archivo JSON 'db1001_data.json' cargado correctamente.
-Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_data.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.json_HMI_Blender_Parameters.xlsx
-Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.json_HMI_Blender_Parameters.xlsx
+Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_data.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.json.xlsx
+Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_data.json.xlsx

 --- Procesando archivo JSON para Excel: db1001_format.json ---
 Archivo JSON 'db1001_format.json' cargado correctamente.
-Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_format.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.json_HMI_Blender_Parameters.xlsx
-Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.json_HMI_Blender_Parameters.xlsx
+Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_format.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.json.xlsx
+Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_format.json.xlsx
+
+--- Procesando archivo JSON para Excel: db1001_updated.json ---
+Archivo JSON 'db1001_updated.json' cargado correctamente.
+Generando documentación Excel para DB: 'HMI_Blender_Parameters' (desde db1001_updated.json) -> C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.json.xlsx
+Excel documentation generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.json.xlsx

 --- Proceso de generación de documentación Excel completado ---

@@ -1,9 +1,9 @@
 --- Log de Ejecución: x7_value_updater.py ---
 Grupo: S7_DB_Utils
 Directorio de Trabajo: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
-Inicio: 2025-05-18 02:56:24
-Fin: 2025-05-18 02:56:25
-Duración: 0:00:00.761362
+Inicio: 2025-05-18 13:21:37
+Fin: 2025-05-18 13:21:38
+Duración: 0:00:01.043746
 Estado: SUCCESS (Código de Salida: 0)

 --- SALIDA ESTÁNDAR (STDOUT) ---
@@ -22,7 +22,7 @@ Comparando estructuras para DB 'HMI_Blender_Parameters': 284 variables en _data,

 Los archivos son compatibles. Creando el archivo _updated...
 Archivo _updated generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_updated.json
-Archivo de comparación Excel generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_comparison.xlsx
+Comparison Excel file generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_comparison.xlsx
 Archivo Markdown generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.md
 Archivo S7 generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.txt
 Archivo S7 copiado a: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\db1001_updated.db

@@ -30,19 +30,19 @@
     "hidden": false
   },
   "x5.py": {
-    "display_name": "05: Generar Descripción MD del JSON",
+    "display_name": "05: Generar Descripción MD",
     "short_description": "Genera documentación descriptiva de archivos JSON en Markdown.",
     "long_description": "Crea un archivo Markdown que documenta la estructura interna de los archivos JSON (generados por x3.py). Detalla UDTs y DBs, incluyendo sus miembros, offsets, tipos de datos, y valores iniciales/actuales, facilitando la comprensión del contenido del JSON.",
     "hidden": false
   },
   "x6.py": {
-    "display_name": "06: Generar Excel desde JSON",
+    "display_name": "06: Generar Excel",
     "short_description": "Genera documentación de DBs en formato Excel (.xlsx) desde JSON.",
     "long_description": "Procesa archivos JSON (generados por x3.py) y exporta la información de cada Bloque de Datos (DB) a un archivo Excel (.xlsx). La hoja de cálculo incluye detalles como direcciones, nombres de variables, tipos de datos, valores iniciales, valores actuales y comentarios.",
     "hidden": false
   },
   "x7_value_updater.py": {
-    "display_name": "07: Actualizar Valores de DB (JSON)",
+    "display_name": "07: Actualizar Valores data+format->updated",
     "short_description": "Busca archivos .db o .awl con la terminacion _data y _format. Si los encuentra y son compatibles usa los datos de _data para generar un _updated con los nombres de las variables de _format",
     "long_description": "Procesa pares de archivos a JSON (_data.json y _format.json, generados por x3.py). Compara sus estructuras por offset para asegurar compatibilidad. Si son compatibles, crea un nuevo archivo _updated.json que combina la estructura del _format.json con los valores actuales del _data.json.",
     "hidden": false
@@ -47,7 +47,8 @@ class VariableInfo:
     comment: Optional[str] = None
     children: List['VariableInfo'] = field(default_factory=list)
     is_udt_expanded_member: bool = False
     current_element_values: Optional[Dict[str, str]] = None
+    element_type: str = "SIMPLE_VAR"  # New field with default value

 @dataclass
 class UdtInfo:
@@ -147,9 +148,9 @@ class S7Parser:
             S7Parser._adjust_children_offsets(child.children, base_offset_add)

     def _parse_struct_members(self, lines: List[str], current_line_idx: int,
                               parent_members_list: List[VariableInfo],
                               active_context: OffsetContext,
                               is_top_level_struct_in_block: bool = False) -> int:
         idx_to_process = current_line_idx
         while idx_to_process < len(lines):
             original_line_text = lines[idx_to_process].strip()
@@ -166,9 +167,9 @@ class S7Parser:
             is_nested_end_struct = self.end_struct_regex.match(line_to_parse) and not is_top_level_struct_in_block
             is_main_block_end_struct = self.end_struct_regex.match(line_to_parse) and is_top_level_struct_in_block
             is_block_terminator = is_top_level_struct_in_block and \
                                   (self.end_type_regex.match(line_to_parse) or \
                                    self.end_db_regex.match(line_to_parse) or \
                                    self.begin_regex.match(line_to_parse))

             if is_nested_end_struct:
                 active_context.align_to_byte()
@@ -191,6 +192,17 @@ class S7Parser:
                                     data_type=clean_data_type,
                                     byte_offset=0, size_in_bytes=0,
                                     udt_source_name=udt_source_name_val)

+            # Set element_type based on what we know about the variable
+            if var_data['arraydims']:
+                var_info.element_type = "ARRAY"
+            elif clean_data_type.upper() == "STRUCT":
+                var_info.element_type = "STRUCT"
+            elif udt_source_name_val:
+                var_info.element_type = "UDT_INSTANCE"
+            else:
+                var_info.element_type = "SIMPLE_VAR"
+
             if var_data.get('initval'): var_info.initial_value = var_data['initval'].strip()
             if line_comment: var_info.comment = line_comment
             num_array_elements = 1
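
Note: the element_type assignment added above is order-sensitive: an array of STRUCT or of a UDT is still classified as ARRAY because the arraydims check comes first. A standalone sketch of the same precedence, with an illustrative signature (the real code works on var_data and var_info):

    def classify_element(arraydims, data_type: str, udt_source_name) -> str:
        # Mirrors the precedence used in _parse_struct_members above.
        if arraydims:
            return "ARRAY"
        elif data_type.upper() == "STRUCT":
            return "STRUCT"
        elif udt_source_name:
            return "UDT_INSTANCE"
        return "SIMPLE_VAR"

    assert classify_element(["1..10"], "INT", None) == "ARRAY"
    assert classify_element([], "STRUCT", None) == "STRUCT"
    assert classify_element([], '"MyUdt"', "MyUdt") == "UDT_INSTANCE"
    assert classify_element([], "REAL", None) == "SIMPLE_VAR"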
@@ -245,10 +257,10 @@ class S7Parser:
                     var_info.children.append(expanded_member)
             parent_members_list.append(var_info)
         elif line_to_parse and \
              not self.struct_start_regex.match(line_to_parse) and \
              not is_main_block_end_struct and \
              not is_nested_end_struct and \
              not is_block_terminator:
             print(f"DEBUG (_parse_struct_members): Line not parsed: Original='{original_line_text}' | Processed='{line_to_parse}'")
         return idx_to_process

@@ -635,71 +647,84 @@ def calculate_array_element_offset(var: VariableInfo, indices_str: str) -> float

 def flatten_db_structure(db_info: Dict[str, Any]) -> List[Dict[str, Any]]:
     """
-    Función genérica que aplana completamente una estructura de DB/UDT,
-    expandiendo todas las variables anidadas, UDTs y elementos de array.
-    Garantiza ordenamiento estricto por offset (byte.bit).
+    Function that completely flattens a DB/UDT structure,
+    expanding all nested variables, UDTs, and array elements.
+    Ensures strict ordering by offset (byte.bit).

     Returns:
-        List[Dict]: Lista de variables aplanadas con todos sus atributos
-        y un path completo, ordenada por offset estricto.
+        List[Dict]: Flattened list of variables with all attributes
+        and a complete path, strictly ordered by offset.
     """
     flat_variables = []
-    processed_ids = set()  # Para evitar duplicados
+    processed_ids = set()  # To avoid duplicates

     def process_variable(var: Dict[str, Any], path_prefix: str = "", is_expansion: bool = False):
-        # Identificador único para esta variable en este contexto
+        # Unique identifier for this variable in this context
         var_id = f"{path_prefix}{var['name']}_{var['byte_offset']}"

-        # Evitar procesar duplicados (como miembros expandidos de UDTs)
+        # Avoid processing duplicates (like expanded UDT members)
         if is_expansion and var_id in processed_ids:
             return
         if is_expansion:
             processed_ids.add(var_id)

-        # Crear copia de la variable con path completo
+        # Create copy of the variable with complete path
         flat_var = var.copy()
         flat_var["full_path"] = f"{path_prefix}{var['name']}"
-        flat_var["is_array_element"] = False  # Por defecto no es elemento de array
+        flat_var["is_array_element"] = False  # Default is not an array element

-        # Determinar si es array con valores específicos
+        # Preserve or infer element_type
+        if "element_type" not in flat_var:
+            # Infer type for backward compatibility
+            if var.get("array_dimensions"):
+                flat_var["element_type"] = "ARRAY"
+            elif var.get("children") and var["data_type"].upper() == "STRUCT":
+                flat_var["element_type"] = "STRUCT"
+            elif var.get("udt_source_name"):
+                flat_var["element_type"] = "UDT_INSTANCE"
+            else:
+                flat_var["element_type"] = "SIMPLE_VAR"
+
+        # Determine if it's an array with specific values
         is_array = bool(var.get("array_dimensions"))
         has_array_values = is_array and var.get("current_element_values")

-        # Si no es un array con valores específicos, agregar la variable base
+        # If not an array with specific values, add the base variable
         if not has_array_values:
-            # Asegurarse de que el offset esté en el formato correcto
+            # Ensure the offset is in the correct format
             flat_var["address_display"] = format_address_for_display(var["byte_offset"], var.get("bit_size", 0))
             flat_variables.append(flat_var)

-        # Si es array con valores específicos, expandir cada elemento como variable individual
+        # If it's an array with specific values, expand each element as individual variable
        if has_array_values:
            for idx, element_data in var.get("current_element_values", {}).items():
-                # Extraer valor y offset del elemento
+                # Extract value and offset of the element
                if isinstance(element_data, dict) and "value" in element_data and "offset" in element_data:
-                    # Nuevo formato con offset calculado
+                    # New format with calculated offset
                    value = element_data["value"]
                    element_offset = element_data["offset"]
                else:
-                    # Compatibilidad con formato antiguo
+                    # Compatibility with old format
                    value = element_data
-                    element_offset = var["byte_offset"]  # Offset base
+                    element_offset = var["byte_offset"]  # Base offset

-                # Crear una entrada por cada elemento del array
+                # Create an entry for each array element
                array_element = var.copy()
                array_element["full_path"] = f"{path_prefix}{var['name']}[{idx}]"
                array_element["is_array_element"] = True
                array_element["array_index"] = idx
                array_element["current_value"] = value
-                array_element["byte_offset"] = element_offset  # Usar offset calculado
+                array_element["byte_offset"] = element_offset  # Use calculated offset
                array_element["address_display"] = format_address_for_display(element_offset, var.get("bit_size", 0))
+                array_element["element_type"] = "ARRAY_ELEMENT"

-                # Eliminar current_element_values para evitar redundancia
+                # Remove current_element_values to avoid redundancy
                if "current_element_values" in array_element:
                    del array_element["current_element_values"]

                flat_variables.append(array_element)

-        # Procesar recursivamente todos los hijos
+        # Process all children recursively
        if var.get("children"):
            for child in var.get("children", []):
                process_variable(
@@ -708,11 +733,11 @@ def flatten_db_structure(db_info: Dict[str, Any]) -> List[Dict[str, Any]]:
                    is_expansion=bool(var.get("udt_source_name"))
                )

-    # Procesar todos los miembros desde el nivel superior
+    # Process all members from the top level
    for member in db_info.get("members", []):
        process_variable(member)

-    # Ordenar estrictamente por offset byte.bit
+    # Sort strictly by offset byte.bit
    flat_variables.sort(key=lambda x: (
        int(x["byte_offset"]),
        int(round((x["byte_offset"] - int(x["byte_offset"])) * 10))
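
Note: the sort key above treats byte_offset as a byte.bit encoding (8.3 means byte 8, bit 3) and splits it into an integer (byte, bit) pair, which keeps ordering stable against float rounding. A self-contained check of the intended behavior:

    offsets = [8.3, 8.1, 2.0, 8.0]
    key = lambda off: (int(off), int(round((off - int(off)) * 10)))
    assert sorted(offsets, key=key) == [2.0, 8.0, 8.1, 8.3]
    # The *10 factor assumes a single digit after the point, which holds
    # for S7 bit offsets 0..7.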
@@ -147,8 +147,9 @@ def main():

     if data_from_json.get("dbs"):
         for db_to_document in data_from_json["dbs"]:
-            excel_output_filename = os.path.join(documentation_dir, f"{current_json_filename}_{db_to_document['name'].replace('"', '')}.xlsx")
+            ## excel_output_filename = os.path.join(documentation_dir, f"{current_json_filename}_{db_to_document['name'].replace('"', '')}.xlsx")
+            excel_output_filename = os.path.join(documentation_dir, f"{current_json_filename}.xlsx")

             print(f"Generando documentación Excel para DB: '{db_to_document['name']}' (desde {current_json_filename}) -> {excel_output_filename}")
             try:
                 generate_excel_table(db_to_document, excel_output_filename)
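
Note: with the single-name scheme above, every DB inside one JSON writes to the same .xlsx path, so if a JSON ever contains more than one DB the later one overwrites the earlier file. A toy illustration of the collision (names are hypothetical):

    import os
    documentation_dir = "documentation"
    current_json_filename = "db1001_data.json"
    dbs = ["HMI_Blender_Parameters", "HMI_Other"]  # hypothetical second DB
    targets = [os.path.join(documentation_dir, f"{current_json_filename}.xlsx") for _ in dbs]
    assert targets[0] == targets[1]  # both DBs target the same file

This is harmless for the single-DB files in this workspace, but worth keeping in mind.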
@@ -132,174 +132,6 @@ def compare_structures_by_offset(data_vars: List[Dict], format_vars: List[Dict])
     return len(issues) == 0, issues


-def create_updated_json(data_json: Dict, format_json: Dict) -> Dict:
-    """
-    Crea JSON actualizado basado en la estructura de _format con valores de _data.
-    Utiliza offset como clave principal para encontrar variables correspondientes.
-    Reporta errores si no se encuentra un offset correspondiente.
-    """
-    # Copia profunda de format_json para no modificar el original
-    updated_json = copy.deepcopy(format_json)
-
-    # Procesar cada DB
-    for db_idx, format_db in enumerate(format_json.get("dbs", [])):
-        # Buscar el DB correspondiente en data_json
-        data_db = next((db for db in data_json.get("dbs", []) if db["name"] == format_db["name"]), None)
-        if not data_db:
-            print(f"Error: No se encontró DB '{format_db['name']}' en data_json")
-            continue  # No hay DB correspondiente en data_json
-
-        # Aplanar variables de ambos DBs
-        flat_data_vars = flatten_db_structure(data_db)
-        flat_format_vars = flatten_db_structure(format_db)
-
-        # Crear mapa de offset a variable para data
-        data_by_offset = {var["byte_offset"]: var for var in flat_data_vars}
-
-        # Para cada variable en format, buscar su correspondiente en data por offset
-        for format_var in flat_format_vars:
-            offset = format_var["byte_offset"]
-            path = format_var["full_path"]
-
-            # Buscar la variable correspondiente en data_json por offset
-            if offset in data_by_offset:
-                data_var = data_by_offset[offset]
-
-                # Encontrar la variable original en la estructura jerárquica
-                path_parts = format_var["full_path"].split('.')
-                current_node = updated_json["dbs"][db_idx]
-
-                # Variable para rastrear si se encontró la ruta
-                path_found = True
-
-                # Navegar la jerarquía hasta encontrar el nodo padre
-                for i in range(len(path_parts) - 1):
-                    if "members" in current_node:
-                        # Buscar el miembro correspondiente
-                        member_name = path_parts[i]
-                        matching_members = [m for m in current_node["members"] if m["name"] == member_name]
-                        if matching_members:
-                            current_node = matching_members[0]
-                        else:
-                            print(f"Error: No se encontró el miembro '{member_name}' en la ruta '{path}'")
-                            path_found = False
-                            break  # No se encontró la ruta
-                    elif "children" in current_node:
-                        # Buscar el hijo correspondiente
-                        child_name = path_parts[i]
-                        matching_children = [c for c in current_node["children"] if c["name"] == child_name]
-                        if matching_children:
-                            current_node = matching_children[0]
-                        else:
-                            print(f"Error: No se encontró el hijo '{child_name}' en la ruta '{path}'")
-                            path_found = False
-                            break  # No se encontró la ruta
-                    else:
-                        print(f"Error: No se puede navegar más en la ruta '{path}', nodo actual no tiene members ni children")
-                        path_found = False
-                        break  # No se puede navegar más
-
-                # Si encontramos el nodo padre, actualizar el hijo
-                if path_found and ("members" in current_node or "children" in current_node):
-                    target_list = current_node.get("members", current_node.get("children", []))
-                    target_name = path_parts[-1]
-
-                    # Si es un elemento de array, extraer el nombre base y el índice
-                    if '[' in target_name and ']' in target_name:
-                        base_name = target_name.split('[')[0]
-                        index_str = target_name[target_name.find('[')+1:target_name.find(']')]
-
-                        # Buscar el array base
-                        array_var = next((var for var in target_list if var["name"] == base_name), None)
-                        if array_var:
-                            # Asegurarse que existe current_element_values
-                            if "current_element_values" not in array_var:
-                                array_var["current_element_values"] = {}
-
-                            # Copiar el valor del elemento del array
-                            if "current_value" in data_var:
-                                array_var["current_element_values"][index_str] = {
-                                    "value": data_var["current_value"],
-                                    "offset": data_var["byte_offset"]
-                                }
-                    else:
-                        # Buscar la variable a actualizar
-                        target_var_found = False
-                        for target_var in target_list:
-                            if target_var["name"] == target_name:
-                                target_var_found = True
-
-                                # Limpiar y copiar initial_value si existe
-                                if "initial_value" in target_var:
-                                    del target_var["initial_value"]
-                                if "initial_value" in data_var and data_var["initial_value"] is not None:
-                                    target_var["initial_value"] = data_var["initial_value"]
-
-                                # Limpiar y copiar current_value si existe
-                                if "current_value" in target_var:
-                                    del target_var["current_value"]
-                                if "current_value" in data_var and data_var["current_value"] is not None:
-                                    target_var["current_value"] = data_var["current_value"]
-
-                                # Limpiar y copiar current_element_values si existe
-                                if "current_element_values" in target_var:
-                                    del target_var["current_element_values"]
-                                if "current_element_values" in data_var and data_var["current_element_values"]:
-                                    target_var["current_element_values"] = copy.deepcopy(data_var["current_element_values"])
-
-                                break
-
-                        if not target_var_found and not ('[' in target_name and ']' in target_name):
-                            print(f"Error: No se encontró la variable '{target_name}' en la ruta '{path}'")
-            else:
-                # El offset no existe en data_json, reportar error
-                print(f"Error: Offset {offset} (para '{path}') no encontrado en los datos source (_data)")
-
-                # Eliminar valores si es una variable que no es elemento de array
-                if '[' not in path or ']' not in path:
-                    # Encontrar la variable original en la estructura jerárquica
-                    path_parts = path.split('.')
-                    current_node = updated_json["dbs"][db_idx]
-
-                    # Navegar hasta el nodo padre para limpiar valores
-                    path_found = True
-                    for i in range(len(path_parts) - 1):
-                        if "members" in current_node:
-                            member_name = path_parts[i]
-                            matching_members = [m for m in current_node["members"] if m["name"] == member_name]
-                            if matching_members:
-                                current_node = matching_members[0]
-                            else:
-                                path_found = False
-                                break
-                        elif "children" in current_node:
-                            child_name = path_parts[i]
-                            matching_children = [c for c in current_node["children"] if c["name"] == child_name]
-                            if matching_children:
-                                current_node = matching_children[0]
-                            else:
-                                path_found = False
-                                break
-                        else:
-                            path_found = False
-                            break
-
-                    if path_found and ("members" in current_node or "children" in current_node):
-                        target_list = current_node.get("members", current_node.get("children", []))
-                        target_name = path_parts[-1]
-
-                        for target_var in target_list:
-                            if target_var["name"] == target_name:
-                                # Eliminar valores iniciales y actuales
-                                if "initial_value" in target_var:
-                                    del target_var["initial_value"]
-                                if "current_value" in target_var:
-                                    del target_var["current_value"]
-                                if "current_element_values" in target_var:
-                                    del target_var["current_element_values"]
-                                break
-
-    return updated_json
-
-
 def process_updated_json(updated_json: Dict, updated_json_path: str, working_dir: str, documentation_dir: str, original_format_file: str):
     """
@@ -347,230 +179,451 @@ def process_updated_json(updated_json: Dict, updated_json_path: str, working_dir
     except Exception as e:
         print(f"Error al generar archivo S7 para {base_name}: {e}")


+def create_updated_json(data_json: Dict, format_json: Dict) -> Dict:
+    """
+    Creates an updated JSON based on the structure of _format with values from _data.
+    Uses offset as the primary key for finding corresponding variables.
+    Reports errors if a corresponding offset is not found.
+    """
+    # Deep copy of format_json to avoid modifying the original
+    updated_json = copy.deepcopy(format_json)
+
+    # Process each DB
+    for db_idx, format_db in enumerate(format_json.get("dbs", [])):
+        # Find corresponding DB in data_json
+        data_db = next((db for db in data_json.get("dbs", []) if db["name"] == format_db["name"]), None)
+        if not data_db:
+            print(f"Error: DB '{format_db['name']}' not found in data_json")
+            continue  # No corresponding DB in data_json
+
+        # Flatten variables from both DBs
+        flat_data_vars = flatten_db_structure(data_db)
+        flat_format_vars = flatten_db_structure(format_db)
+
+        # Create offset to variable map for data - ONLY include usable variables (SIMPLE_VAR and ARRAY_ELEMENT)
+        # This is the key fix: filter by element_type to avoid matching STRUCT and other non-value types
+        data_by_offset = {
+            var["byte_offset"]: var for var in flat_data_vars
+            if var.get("element_type") in ["SIMPLE_VAR", "ARRAY_ELEMENT"]
+        }
+
+        # For each variable in format, find its corresponding in data by offset
+        for format_var in flat_format_vars:
+            # Only process variables and array elements, not structures or UDT instances
+            if format_var.get("element_type") not in ["SIMPLE_VAR", "ARRAY_ELEMENT"]:
+                continue
+
+            offset = format_var["byte_offset"]
+            path = format_var["full_path"]
+
+            # Find the corresponding variable in data_json by offset
+            if offset in data_by_offset:
+                data_var = data_by_offset[offset]
+
+                # Even though we've filtered the data variables, double-check element types
+                format_element_type = format_var.get("element_type")
+                data_element_type = data_var.get("element_type")
+
+                # Only copy values if element types are compatible
+                if format_element_type == data_element_type or (
+                    format_element_type in ["SIMPLE_VAR", "ARRAY_ELEMENT"] and
+                    data_element_type in ["SIMPLE_VAR", "ARRAY_ELEMENT"]
+                ):
+                    # Find the original variable in the hierarchical structure
+                    path_parts = format_var["full_path"].split('.')
+                    current_node = updated_json["dbs"][db_idx]
+
+                    # Variable to track if the path was found
+                    path_found = True
+
+                    # Navigate the hierarchy to find the parent node
+                    for i in range(len(path_parts) - 1):
+                        if "members" in current_node:
+                            # Find the corresponding member
+                            member_name = path_parts[i]
+                            matching_members = [m for m in current_node["members"] if m["name"] == member_name]
+                            if matching_members:
+                                current_node = matching_members[0]
+                            else:
+                                print(f"Error: Member '{member_name}' not found in path '{path}'")
+                                path_found = False
+                                break  # Path not found
+                        elif "children" in current_node:
+                            # Find the corresponding child
+                            child_name = path_parts[i]
+                            matching_children = [c for c in current_node["children"] if c["name"] == child_name]
+                            if matching_children:
+                                current_node = matching_children[0]
+                            else:
+                                print(f"Error: Child '{child_name}' not found in path '{path}'")
+                                path_found = False
+                                break  # Path not found
+                        else:
+                            print(f"Error: Cannot navigate further in path '{path}', current node has no members or children")
+                            path_found = False
+                            break  # Cannot navigate further
+
+                    # If parent node found, update the child
+                    if path_found and ("members" in current_node or "children" in current_node):
+                        target_list = current_node.get("members", current_node.get("children", []))
+                        target_name = path_parts[-1]
+
+                        # If it's an array element, extract the base name and index
+                        if '[' in target_name and ']' in target_name:
+                            base_name = target_name.split('[')[0]
+                            index_str = target_name[target_name.find('[')+1:target_name.find(']')]
+
+                            # Find the base array
+                            array_var = next((var for var in target_list if var["name"] == base_name), None)
+                            if array_var:
+                                # Ensure current_element_values exists
+                                if "current_element_values" not in array_var:
+                                    array_var["current_element_values"] = {}
+
+                                # Copy the array element value
+                                if "current_value" in data_var:
+                                    array_var["current_element_values"][index_str] = {
+                                        "value": data_var["current_value"],
+                                        "offset": data_var["byte_offset"]
+                                    }
+                        else:
+                            # Find the variable to update
+                            target_var_found = False
+                            for target_var in target_list:
+                                if target_var["name"] == target_name:
+                                    target_var_found = True
+
+                                    # Clean and copy initial_value if exists
+                                    if "initial_value" in target_var:
+                                        del target_var["initial_value"]
+                                    if "initial_value" in data_var and data_var["initial_value"] is not None:
+                                        target_var["initial_value"] = data_var["initial_value"]
+
+                                    # Clean and copy current_value if exists
+                                    if "current_value" in target_var:
+                                        del target_var["current_value"]
+                                    if "current_value" in data_var and data_var["current_value"] is not None:
+                                        target_var["current_value"] = data_var["current_value"]
+
+                                    # Clean and copy current_element_values if exists
+                                    if "current_element_values" in target_var:
+                                        del target_var["current_element_values"]
+                                    if "current_element_values" in data_var and data_var["current_element_values"]:
+                                        target_var["current_element_values"] = copy.deepcopy(data_var["current_element_values"])
+
+                                    break
+
+                            if not target_var_found and not ('[' in target_name and ']' in target_name):
+                                print(f"Error: Variable '{target_name}' not found in path '{path}'")
+                else:
+                    print(f"Warning: Element types don't match at offset {offset} for '{path}': {format_element_type} vs {data_element_type}")
+            else:
+                # Offset not found in data_json, report error
+                print(f"Error: Offset {offset} (for '{path}') not found in source data (_data)")
+
+                # Clear values if it's not an array element
+                if '[' not in path or ']' not in path:
+                    # Find the original variable in the hierarchical structure
+                    path_parts = path.split('.')
+                    current_node = updated_json["dbs"][db_idx]
+
+                    # Navigate to the parent node to clean values
+                    path_found = True
+                    for i in range(len(path_parts) - 1):
+                        if "members" in current_node:
+                            member_name = path_parts[i]
+                            matching_members = [m for m in current_node["members"] if m["name"] == member_name]
+                            if matching_members:
+                                current_node = matching_members[0]
+                            else:
+                                path_found = False
+                                break
+                        elif "children" in current_node:
+                            child_name = path_parts[i]
+                            matching_children = [c for c in current_node["children"] if c["name"] == child_name]
+                            if matching_children:
+                                current_node = matching_children[0]
+                            else:
+                                path_found = False
+                                break
+                        else:
+                            path_found = False
+                            break
+
+                    if path_found and ("members" in current_node or "children" in current_node):
+                        target_list = current_node.get("members", current_node.get("children", []))
+                        target_name = path_parts[-1]
+
+                        for target_var in target_list:
+                            if target_var["name"] == target_name:
+                                # Remove initial and current values
+                                if "initial_value" in target_var:
+                                    del target_var["initial_value"]
+                                if "current_value" in target_var:
+                                    del target_var["current_value"]
+                                if "current_element_values" in target_var:
+                                    del target_var["current_element_values"]
+                                break
+
+    return updated_json

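
Note: the element_type filter when building data_by_offset is the key fix called out in the comment above: a STRUCT container shares its byte offset with its first member, so an unfiltered {byte_offset: var} map can make a format variable match a container instead of a value-bearing entry. A toy illustration with simplified records:

    flat_data_vars = [
        {"byte_offset": 0.0, "element_type": "STRUCT", "full_path": "Cfg"},
        {"byte_offset": 0.0, "element_type": "SIMPLE_VAR", "full_path": "Cfg.Enable", "current_value": "TRUE"},
    ]
    data_by_offset = {
        v["byte_offset"]: v for v in flat_data_vars
        if v.get("element_type") in ["SIMPLE_VAR", "ARRAY_ELEMENT"]
    }
    assert data_by_offset[0.0]["full_path"] == "Cfg.Enable"  # only the value-bearing entry remains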
 def generate_comparison_excel(format_json: Dict, data_json: Dict, updated_json: Dict, excel_filename: str):
     """
-    Genera un archivo Excel con dos hojas que comparan los valores iniciales y actuales
-    entre los archivos format_json, data_json y updated_json.
-    Filtra STRUCTs y solo compara variables con valores reales.
+    Generates a comprehensive Excel file comparing values between format, data and updated JSONs.
+    Uses flatten_db_structure and matches by offset, leveraging element_type for better filtering.

     Args:
-        format_json: JSON con la estructura y nombres de formato
-        data_json: JSON con los datos source
-        updated_json: JSON con los datos actualizados
-        excel_filename: Ruta del archivo Excel a generar
+        format_json: JSON with the structure and names from format
+        data_json: JSON with the source data
+        updated_json: JSON with the updated data
+        excel_filename: Path to the Excel file to generate
     """
     import openpyxl
     from openpyxl.utils import get_column_letter
-    from openpyxl.styles import PatternFill, Font
+    from openpyxl.styles import PatternFill, Font, Alignment, Border, Side

-    # Crear un nuevo libro de Excel
+    # Create a new Excel workbook
     workbook = openpyxl.Workbook()
+    sheet = workbook.active
+    sheet.title = "Value_Comparison"

-    # Definir estilos para resaltar diferencias
-    diff_fill = PatternFill(start_color="FFFF00", end_color="FFFF00", fill_type="solid")  # Amarillo
+    # Define styles
+    diff_fill = PatternFill(start_color="FFFF00", end_color="FFFF00", fill_type="solid")
+    type_mismatch_fill = PatternFill(start_color="FF9999", end_color="FF9999", fill_type="solid")  # Light red
     header_font = Font(bold=True)
+    header_fill = PatternFill(start_color="DDDDDD", end_color="DDDDDD", fill_type="solid")
+    thin_border = Border(left=Side(style='thin'), right=Side(style='thin'),
+                         top=Side(style='thin'), bottom=Side(style='thin'))

-    # Procesar cada DB
+    # Set up headers
+    headers = ["Address", "Name", "Type", "Element Type",
+               "Format Initial", "Data Initial", "Updated Initial",
+               "Format Current", "Data Current", "Updated Current",
+               "Type Match", "Value Differences"]
+
+    for col_num, header in enumerate(headers, 1):
+        cell = sheet.cell(row=1, column=col_num, value=header)
+        cell.font = header_font
+        cell.fill = header_fill
+        cell.border = thin_border
+        cell.alignment = Alignment(horizontal='center')
+
+    # Freeze top row
+    sheet.freeze_panes = "A2"
+
+    current_row = 2
+
+    # Process each DB
     for db_idx, format_db in enumerate(format_json.get("dbs", [])):
-        # Buscar los DBs correspondientes
         db_name = format_db["name"]
         data_db = next((db for db in data_json.get("dbs", []) if db["name"] == db_name), None)
         updated_db = next((db for db in updated_json.get("dbs", []) if db["name"] == db_name), None)

         if not data_db or not updated_db:
-            print(f"Error: No se encontró el DB '{db_name}' en alguno de los archivos JSON")
+            print(f"Error: DB '{db_name}' not found in one of the JSON files")
             continue

-        # Crear hojas para valores iniciales y actuales para este DB
-        initial_sheet = workbook.active if db_idx == 0 else workbook.create_sheet()
-        initial_sheet.title = f"{db_name}_Initial"[:31]  # Limitar longitud del nombre de hoja
-
-        current_sheet = workbook.create_sheet()
-        current_sheet.title = f"{db_name}_Current"[:31]
-
-        # Aplanar variables de los tres DBs
+        # Add DB name as section header with merged cells
+        sheet.merge_cells(f'A{current_row}:L{current_row}')
+        header_cell = sheet.cell(row=current_row, column=1, value=f"DB: {db_name}")
+        header_cell.font = Font(bold=True, size=12)
+        header_cell.fill = PatternFill(start_color="CCCCFF", end_color="CCCCFF", fill_type="solid")  # Light blue
+        header_cell.alignment = Alignment(horizontal='center')
+        current_row += 1
+
+        # Get flattened variables from all sources
         flat_format_vars = flatten_db_structure(format_db)
         flat_data_vars = flatten_db_structure(data_db)
         flat_updated_vars = flatten_db_structure(updated_db)

-        # Filtrar STRUCTs - solo trabajamos con variables que tienen valores reales
-        flat_format_vars = [var for var in flat_format_vars
-                            if var["data_type"].upper() != "STRUCT" and not var.get("children")]
-
-        # Crear mapas de offset a variable para búsqueda rápida
-        data_by_offset = {var["byte_offset"]: var for var in flat_data_vars
-                          if var["data_type"].upper() != "STRUCT" and not var.get("children")}
-        updated_by_offset = {var["byte_offset"]: var for var in flat_updated_vars
-                             if var["data_type"].upper() != "STRUCT" and not var.get("children")}
-
-        # Configurar encabezados para la hoja de valores iniciales
-        headers_initial = ["Address", "Name", "Type", "Format Initial", "Data Initial", "Updated Initial", "Difference"]
-        for col_num, header in enumerate(headers_initial, 1):
-            cell = initial_sheet.cell(row=1, column=col_num, value=header)
-            cell.font = header_font
-
-        # Configurar encabezados para la hoja de valores actuales
-        headers_current = ["Address", "Name", "Type", "Format Current", "Data Current", "Updated Current", "Difference"]
-        for col_num, header in enumerate(headers_current, 1):
-            cell = current_sheet.cell(row=1, column=col_num, value=header)
-            cell.font = header_font
-
-        # Llenar las hojas con datos
-        initial_row = 2
-        current_row = 2
-
+        # Create maps by offset for quick lookup
+        data_by_offset = {var["byte_offset"]: var for var in flat_data_vars}
+        updated_by_offset = {var["byte_offset"]: var for var in flat_updated_vars}
+
+        # Process each variable from format_json
         for format_var in flat_format_vars:
+            # Skip certain types based on element_type
+            element_type = format_var.get("element_type", "UNKNOWN")
+
+            # Skip STRUCT types with no values, but include ARRAY and UDT_INSTANCE types
+            if element_type == "STRUCT" and not format_var.get("children"):
+                continue
+
             offset = format_var["byte_offset"]
             path = format_var["full_path"]
             data_type = format_data_type_for_source(format_var)
             address = format_var.get("address_display", format_address_for_display(offset, format_var.get("bit_size", 0)))

-            # Obtener variables correspondientes por offset
+            # Find corresponding variables by offset
             data_var = data_by_offset.get(offset)
             updated_var = updated_by_offset.get(offset)

-            # Procesar valores iniciales (solo si la variable puede tener initial_value)
-            format_initial = format_var.get("initial_value", "")
-            data_initial = data_var.get("initial_value", "") if data_var else ""
-            updated_initial = updated_var.get("initial_value", "") if updated_var else ""
-
-            # Solo incluir en la hoja de valores iniciales si al menos uno tiene valor inicial
-            if format_initial or data_initial or updated_initial:
-                # Determinar si hay diferencias en valores iniciales
-                has_initial_diff = (format_initial != data_initial or
-                                    format_initial != updated_initial or
-                                    data_initial != updated_initial)
-
-                # Escribir datos de valores iniciales
-                initial_sheet.cell(row=initial_row, column=1, value=address)
-                initial_sheet.cell(row=initial_row, column=2, value=path)
-                initial_sheet.cell(row=initial_row, column=3, value=data_type)
-                initial_sheet.cell(row=initial_row, column=4, value=str(format_initial))
-                initial_sheet.cell(row=initial_row, column=5, value=str(data_initial))
-                initial_sheet.cell(row=initial_row, column=6, value=str(updated_initial))
-
-                # Resaltar diferencias en valores iniciales
-                if has_initial_diff:
-                    initial_sheet.cell(row=initial_row, column=7, value="Sí")
-                    for col in range(4, 7):
-                        initial_sheet.cell(row=initial_row, column=col).fill = diff_fill
-                else:
-                    initial_sheet.cell(row=initial_row, column=7, value="No")
-
-                initial_row += 1
-
-            # Procesar valores actuales
-            format_current = format_var.get("current_value", "")
-            data_current = data_var.get("current_value", "") if data_var else ""
-            updated_current = updated_var.get("current_value", "") if updated_var else ""
-
-            # Solo incluir en la hoja de valores actuales si al menos uno tiene valor actual
-            if format_current or data_current or updated_current:
-                # Determinar si hay diferencias en valores actuales
-                has_current_diff = (format_current != data_current or
-                                    format_current != updated_current or
-                                    data_current != updated_current)
-
-                # Escribir datos de valores actuales
-                current_sheet.cell(row=current_row, column=1, value=address)
-                current_sheet.cell(row=current_row, column=2, value=path)
-                current_sheet.cell(row=current_row, column=3, value=data_type)
-                current_sheet.cell(row=current_row, column=4, value=str(format_current))
-                current_sheet.cell(row=current_row, column=5, value=str(data_current))
-                current_sheet.cell(row=current_row, column=6, value=str(updated_current))
-
-                # Resaltar diferencias en valores actuales
-                if has_current_diff:
-                    current_sheet.cell(row=current_row, column=7, value="Sí")
-                    for col in range(4, 7):
-                        current_sheet.cell(row=current_row, column=col).fill = diff_fill
-                else:
-                    current_sheet.cell(row=current_row, column=7, value="No")
-
-                current_row += 1
-
-            # Si es un array, procesamos también sus elementos
-            if format_var.get("current_element_values") or (data_var and data_var.get("current_element_values")) or (updated_var and updated_var.get("current_element_values")):
-                format_elements = format_var.get("current_element_values", {})
-                data_elements = data_var.get("current_element_values", {}) if data_var else {}
-                updated_elements = updated_var.get("current_element_values", {}) if updated_var else {}
-
-                # Unir todos los índices disponibles
-                all_indices = set(list(format_elements.keys()) +
-                                  list(data_elements.keys()) +
-                                  list(updated_elements.keys()))
-
-                # Ordenar índices numéricamente
-                sorted_indices = sorted(all_indices, key=lambda x: [int(i) for i in x.split(',')]) if all_indices else []
-
-                for idx in sorted_indices:
-                    elem_path = f"{path}[{idx}]"
-
-                    # Valores actuales para elementos de array
-                    format_elem_val = ""
-                    if idx in format_elements:
-                        if isinstance(format_elements[idx], dict) and "value" in format_elements[idx]:
-                            format_elem_val = format_elements[idx]["value"]
-                        else:
-                            format_elem_val = format_elements[idx]
-
-                    data_elem_val = ""
-                    if idx in data_elements:
-                        if isinstance(data_elements[idx], dict) and "value" in data_elements[idx]:
-                            data_elem_val = data_elements[idx]["value"]
-                        else:
-                            data_elem_val = data_elements[idx]
-
-                    updated_elem_val = ""
-                    if idx in updated_elements:
-                        if isinstance(updated_elements[idx], dict) and "value" in updated_elements[idx]:
-                            updated_elem_val = updated_elements[idx]["value"]
-                        else:
-                            updated_elem_val = updated_elements[idx]
-
-                    # Determinar si hay diferencias
-                    has_elem_diff = (str(format_elem_val) != str(data_elem_val) or
-                                     str(format_elem_val) != str(updated_elem_val) or
-                                     str(data_elem_val) != str(updated_elem_val))
-
-                    # Escribir datos de elementos de array (solo en hoja de valores actuales)
-                    current_sheet.cell(row=current_row, column=1, value=address)
+            # Compare element types
+            data_element_type = data_var.get("element_type", "MISSING") if data_var else "MISSING"
+            updated_element_type = updated_var.get("element_type", "MISSING") if updated_var else "MISSING"
+
+            # Determine type compatibility
+            type_match = "Yes"
+            if data_var and element_type != data_element_type:
+                # Check for compatible types
+                if (element_type in ["SIMPLE_VAR", "ARRAY_ELEMENT"] and
+                    data_element_type in ["SIMPLE_VAR", "ARRAY_ELEMENT"]):
+                    type_match = "Compatible"
+                else:
+                    type_match = "No"
+            elif not data_var:
+                type_match = "Missing"
+
+            # Get values (with empty string defaults)
+            format_initial = str(format_var.get("initial_value", ""))
+            data_initial = str(data_var.get("initial_value", "")) if data_var else ""
+            updated_initial = str(updated_var.get("initial_value", "")) if updated_var else ""
+            format_current = str(format_var.get("current_value", ""))
+            data_current = str(data_var.get("current_value", "")) if data_var else ""
+            updated_current = str(updated_var.get("current_value", "")) if updated_var else ""
+
+            # Check for differences
+            has_initial_diff = (format_initial != data_initial or
+                                format_initial != updated_initial or
+                                data_initial != updated_initial)
+            has_current_diff = (format_current != data_current or
+                                format_current != updated_current or
+                                data_current != updated_current)
+
+            # Create detailed difference description
+            diff_desc = []
+            if has_initial_diff:
+                diff_desc.append("Initial values differ")
+            if has_current_diff:
+                diff_desc.append("Current values differ")
+            if not diff_desc:
+                diff_desc.append("None")
+
+            # Write data
+            sheet.cell(row=current_row, column=1, value=address)
+            sheet.cell(row=current_row, column=2, value=path)
+            sheet.cell(row=current_row, column=3, value=data_type)
+            sheet.cell(row=current_row, column=4, value=element_type)
+            sheet.cell(row=current_row, column=5, value=format_initial)
+            sheet.cell(row=current_row, column=6, value=data_initial)
+            sheet.cell(row=current_row, column=7, value=updated_initial)
+            sheet.cell(row=current_row, column=8, value=format_current)
+            sheet.cell(row=current_row, column=9, value=data_current)
+            sheet.cell(row=current_row, column=10, value=updated_current)
+            sheet.cell(row=current_row, column=11, value=type_match)
+            sheet.cell(row=current_row, column=12, value=", ".join(diff_desc))
+
+            # Add borders to all cells
+            for col in range(1, 13):
+                sheet.cell(row=current_row, column=col).border = thin_border
+
+            # Highlight differences
+            if has_initial_diff:
+                for col in range(5, 8):
+                    sheet.cell(row=current_row, column=col).fill = diff_fill
+
+            if has_current_diff:
+                for col in range(8, 11):
+                    sheet.cell(row=current_row, column=col).fill = diff_fill
|
# Highlight type mismatches
|
||||||
current_sheet.cell(row=current_row, column=2, value=elem_path)
|
if type_match == "No" or type_match == "Missing":
|
||||||
current_sheet.cell(row=current_row, column=3, value=data_type.replace("ARRAY", "").strip())
|
sheet.cell(row=current_row, column=11).fill = type_mismatch_fill
|
||||||
current_sheet.cell(row=current_row, column=4, value=str(format_elem_val))
|
|
||||||
current_sheet.cell(row=current_row, column=5, value=str(data_elem_val))
|
current_row += 1
|
||||||
current_sheet.cell(row=current_row, column=6, value=str(updated_elem_val))
|
|
||||||
|
# Add filter to headers
|
||||||
# Resaltar diferencias
|
sheet.auto_filter.ref = f"A1:L{current_row-1}"
|
||||||
if has_elem_diff:
|
|
||||||
current_sheet.cell(row=current_row, column=7, value="Sí")
|
# Auto-adjust column widths
|
||||||
for col in range(4, 7):
|
for col_idx, column_cells in enumerate(sheet.columns, 1):
|
||||||
current_sheet.cell(row=current_row, column=col).fill = diff_fill
|
max_length = 0
|
||||||
else:
|
column = get_column_letter(col_idx)
|
||||||
current_sheet.cell(row=current_row, column=7, value="No")
|
for cell in column_cells:
|
||||||
|
try:
|
||||||
current_row += 1
|
if len(str(cell.value)) > max_length:
|
||||||
|
max_length = len(str(cell.value))
|
||||||
# Auto-ajustar anchos de columna
|
except:
|
||||||
for sheet in [initial_sheet, current_sheet]:
|
pass
|
||||||
for col_idx, column_cells in enumerate(sheet.columns, 1):
|
adjusted_width = min(max_length + 2, 100) # Limit maximum width
|
||||||
max_length = 0
|
sheet.column_dimensions[column].width = adjusted_width
|
||||||
column = get_column_letter(col_idx)
|
|
||||||
for cell in column_cells:
|
# Add a summary sheet
|
||||||
try:
|
summary_sheet = workbook.create_sheet(title="Summary")
|
||||||
if len(str(cell.value)) > max_length:
|
summary_sheet.column_dimensions['A'].width = 30
|
||||||
max_length = len(str(cell.value))
|
summary_sheet.column_dimensions['B'].width = 15
|
||||||
except:
|
summary_sheet.column_dimensions['C'].width = 50
|
||||||
pass
|
|
||||||
adjusted_width = min(max_length + 2, 100) # Limitar ancho máximo
|
# Add header to summary
|
||||||
sheet.column_dimensions[column].width = adjusted_width
|
summary_headers = ["Database", "Item Count", "Notes"]
|
||||||
|
for col_num, header in enumerate(summary_headers, 1):
|
||||||
|
cell = summary_sheet.cell(row=1, column=col_num, value=header)
|
||||||
|
cell.font = header_font
|
||||||
|
cell.fill = header_fill
|
||||||
|
|
||||||
|
# Add summary data
|
||||||
|
summary_row = 2
|
||||||
|
for db_idx, format_db in enumerate(format_json.get("dbs", [])):
|
||||||
|
db_name = format_db["name"]
|
||||||
|
data_db = next((db for db in data_json.get("dbs", []) if db["name"] == db_name), None)
|
||||||
|
updated_db = next((db for db in updated_json.get("dbs", []) if db["name"] == db_name), None)
|
||||||
|
|
||||||
|
if not data_db or not updated_db:
|
||||||
|
continue
|
||||||
|
|
||||||
|
flat_format_vars = flatten_db_structure(format_db)
|
||||||
|
flat_data_vars = flatten_db_structure(data_db)
|
||||||
|
|
||||||
|
# Count by element type
|
||||||
|
format_type_counts = {}
|
||||||
|
for var in flat_format_vars:
|
||||||
|
element_type = var.get("element_type", "UNKNOWN")
|
||||||
|
format_type_counts[element_type] = format_type_counts.get(element_type, 0) + 1
|
||||||
|
|
||||||
|
# Count value differences
|
||||||
|
data_by_offset = {var["byte_offset"]: var for var in flat_data_vars}
|
||||||
|
diff_count = 0
|
||||||
|
type_mismatch_count = 0
|
||||||
|
|
||||||
|
for format_var in flat_format_vars:
|
||||||
|
offset = format_var["byte_offset"]
|
||||||
|
data_var = data_by_offset.get(offset)
|
||||||
|
|
||||||
|
if data_var:
|
||||||
|
# Check for type mismatch
|
||||||
|
if format_var.get("element_type") != data_var.get("element_type"):
|
||||||
|
type_mismatch_count += 1
|
||||||
|
|
||||||
|
# Check for value differences
|
||||||
|
format_initial = str(format_var.get("initial_value", ""))
|
||||||
|
data_initial = str(data_var.get("initial_value", ""))
|
||||||
|
format_current = str(format_var.get("current_value", ""))
|
||||||
|
data_current = str(data_var.get("current_value", ""))
|
||||||
|
|
||||||
|
if format_initial != data_initial or format_current != data_current:
|
||||||
|
diff_count += 1
|
||||||
|
|
||||||
|
# Write to summary
|
||||||
|
summary_sheet.cell(row=summary_row, column=1, value=db_name)
|
||||||
|
summary_sheet.cell(row=summary_row, column=2, value=len(flat_format_vars))
|
||||||
|
|
||||||
|
notes = []
|
||||||
|
for element_type, count in format_type_counts.items():
|
||||||
|
notes.append(f"{element_type}: {count}")
|
||||||
|
notes.append(f"Value differences: {diff_count}")
|
||||||
|
notes.append(f"Type mismatches: {type_mismatch_count}")
|
||||||
|
|
||||||
|
summary_sheet.cell(row=summary_row, column=3, value=", ".join(notes))
|
||||||
|
summary_row += 1
|
||||||
|
|
||||||
-    # Save the Excel file
     try:
         workbook.save(excel_filename)
-        print(f"Archivo de comparación Excel generado: {excel_filename}")
+        print(f"Comparison Excel file generated: {excel_filename}")
     except Exception as e:
-        print(f"Error al escribir el archivo Excel {excel_filename}: {e}")
+        print(f"Error writing Excel file {excel_filename}: {e}")


 def main():
     working_dir = find_working_directory()
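The single-sheet layout above leans on three openpyxl techniques: a PatternFill applied only to the cells whose values differ, a thin Border on every written cell, and auto_filter over the header row. The following is a minimal, self-contained sketch of that pattern, assuming openpyxl is installed; the headers, sample rows, and output file name are illustrative and not taken from x7_value_updater.py:

    from openpyxl import Workbook
    from openpyxl.styles import Font, PatternFill, Border, Side
    from openpyxl.utils import get_column_letter

    wb = Workbook()
    ws = wb.active
    ws.title = "Comparison"

    header_font = Font(bold=True)
    header_fill = PatternFill(start_color="DDDDDD", end_color="DDDDDD", fill_type="solid")
    diff_fill = PatternFill(start_color="FFFF99", end_color="FFFF99", fill_type="solid")
    thin = Side(style="thin")
    thin_border = Border(left=thin, right=thin, top=thin, bottom=thin)

    headers = ["Address", "Path", "Format Initial", "Data Initial", "Differences"]
    for col, h in enumerate(headers, 1):
        cell = ws.cell(row=1, column=col, value=h)
        cell.font = header_font
        cell.fill = header_fill

    # Hypothetical rows: (address, path, format_initial, data_initial)
    rows = [("0.0", "Var1", "0", "0"), ("2.0", "Var2", "10", "25")]
    row_num = 2
    for address, path, fmt_init, data_init in rows:
        has_diff = fmt_init != data_init
        values = [address, path, fmt_init, data_init,
                  "Initial values differ" if has_diff else "None"]
        for col, v in enumerate(values, 1):
            c = ws.cell(row=row_num, column=col, value=v)
            c.border = thin_border
            if has_diff and col in (3, 4):
                c.fill = diff_fill  # highlight only the differing value columns
        row_num += 1

    ws.auto_filter.ref = f"A1:E{row_num - 1}"  # filterable headers, as in the new code
    for col_idx in range(1, len(headers) + 1):
        ws.column_dimensions[get_column_letter(col_idx)].width = 18

    wb.save("comparison_demo.xlsx")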
data/log.txt
@@ -1,21 +1,21 @@
-[02:56:24] Iniciando ejecución de x7_value_updater.py en C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001...
-[02:56:24] Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
-[02:56:24] Los archivos JSON se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json
-[02:56:24] Los archivos de documentación se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
-[02:56:24] Se encontraron 1 pares de archivos para procesar.
-[02:56:24] --- Procesando par de archivos ---
-[02:56:24] Data file: db1001_data.db
-[02:56:24] Format file: db1001_format.db
-[02:56:24] Parseando archivo data: db1001_data.db
-[02:56:24] Parseando archivo format: db1001_format.db
-[02:56:24] Archivos JSON generados: db1001_data.json y db1001_format.json
-[02:56:24] Comparando estructuras para DB 'HMI_Blender_Parameters': 284 variables en _data, 284 variables en _format
-[02:56:24] Los archivos son compatibles. Creando el archivo _updated...
-[02:56:24] Archivo _updated generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_updated.json
-[02:56:25] Archivo de comparación Excel generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_comparison.xlsx
-[02:56:25] Archivo Markdown generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.md
-[02:56:25] Archivo S7 generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.txt
-[02:56:25] Archivo S7 copiado a: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\db1001_updated.db
-[02:56:25] --- Proceso completado ---
-[02:56:25] Ejecución de x7_value_updater.py finalizada (success). Duración: 0:00:00.761362.
-[02:56:25] Log completo guardado en: D:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\S7_DB_Utils\log_x7_value_updater.txt
+[13:21:37] Iniciando ejecución de x7_value_updater.py en C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001...
+[13:21:37] Using working directory: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001
+[13:21:37] Los archivos JSON se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json
+[13:21:37] Los archivos de documentación se guardarán en: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation
+[13:21:37] Se encontraron 1 pares de archivos para procesar.
+[13:21:37] --- Procesando par de archivos ---
+[13:21:37] Data file: db1001_data.db
+[13:21:37] Format file: db1001_format.db
+[13:21:37] Parseando archivo data: db1001_data.db
+[13:21:37] Parseando archivo format: db1001_format.db
+[13:21:37] Archivos JSON generados: db1001_data.json y db1001_format.json
+[13:21:37] Comparando estructuras para DB 'HMI_Blender_Parameters': 284 variables en _data, 284 variables en _format
+[13:21:37] Los archivos son compatibles. Creando el archivo _updated...
+[13:21:37] Archivo _updated generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\json\db1001_updated.json
+[13:21:38] Comparison Excel file generated: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_comparison.xlsx
+[13:21:38] Archivo Markdown generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.md
+[13:21:38] Archivo S7 generado: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\documentation\db1001_updated.txt
+[13:21:38] Archivo S7 copiado a: C:\Trabajo\SIDEL\09 - SAE452 - Diet as Regular - San Giovanni in Bosco\Reporte\DB1001\db1001_updated.db
+[13:21:38] --- Proceso completado ---
+[13:21:38] Ejecución de x7_value_updater.py finalizada (success). Duración: 0:00:01.043746.
+[13:21:38] Log completo guardado en: D:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\S7_DB_Utils\log_x7_value_updater.txt
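The compatibility check the log reports ("284 variables en _data, 284 variables en _format") reduces to matching the two flattened variable lists by byte offset, the same indexing the summary code uses. Below is a minimal sketch of that step, assuming each flatten_db_structure() entry carries byte_offset, element_type, and the value fields seen above; the sample dicts and the merge semantics (structure from _format, live values from _data) are illustrative assumptions, not lifted from x7_value_updater.py:

    # Hypothetical flattened entries; the real ones come from flatten_db_structure().
    format_vars = [
        {"byte_offset": 0.0, "element_type": "SIMPLE_VAR", "initial_value": "0"},
        {"byte_offset": 2.0, "element_type": "SIMPLE_VAR", "initial_value": "10"},
    ]
    data_vars = [
        {"byte_offset": 0.0, "element_type": "SIMPLE_VAR", "current_value": "25"},
        {"byte_offset": 2.0, "element_type": "SIMPLE_VAR", "current_value": "10"},
    ]

    # Index the _data side by byte offset, as the summary code does.
    data_by_offset = {var["byte_offset"]: var for var in data_vars}

    # Structures are compatible when every _format variable finds a _data
    # variable of the same element type at the same offset.
    compatible = all(
        data_by_offset.get(v["byte_offset"], {}).get("element_type") == v["element_type"]
        for v in format_vars
    )

    if compatible:
        # Assumed merge: keep the _format structure, copy current values from _data.
        updated_vars = []
        for v in format_vars:
            merged = dict(v)
            data_var = data_by_offset[v["byte_offset"]]
            if "current_value" in data_var:
                merged["current_value"] = data_var["current_value"]
            updated_vars.append(merged)
        print(f"Compatible: merged {len(updated_vars)} variables")
    else:
        print("Incompatible structures; _updated not created")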