Created new script group IO_adaptation

This commit is contained in:
Miguel 2025-05-14 18:04:17 +02:00
parent bf75f6d4d0
commit 6e36186012
17 changed files with 1606 additions and 47 deletions


@@ -0,0 +1,125 @@
# I/O Adaptation between PLC Hardware and Master Software
## Adaptation Table
| IO | Master Tag | PLC Description | Master Description | Certainty | Alternative |
| --- | --- | --- | --- | --- | --- |
| I0.0 | DI_AuxVoltage_On | AUXILIARY CIRC. ON AUSILIARI INSERITI | Electrical Panel Restored | High | |
| I0.1 | DI_PB_HornReset | SIREN RESET RESET SIRENA | PB Horn Reset | High | |
| I0.2 | DI_AlarmReset | RESET RESET | PB Machine Reset | High | |
| I0.3 | DI_PB_Machine_Stop | MACHINE STOP ARRESTO MACCHINA | PB Machine Stop | High | |
| I0.4 | DI_PB_Machine_Start | MARCIA MACCHINA MACHINE START | PB Machine Start | High | |
| I0.5 | DI_Emergency_Pilz_On | PRESENZA TENSIONE VOLTAGE PRESENCE | Pilz Emergency | Medium | DI_UPSsupply, DI_AuxVoltage_On |
| I0.6 | DI_LSN301L | SONDA LIVELLO MINIMO DEAREATORE 1 | LSN301_L - Deaireator Tank Minimun Level | High | |
| I0.7 | DI_Min_Syrup_Level | SONDA LIVELLO MINIMO SCIRO - PPC SYRUP MINIMUM LEVEL PROBE | - Syrup Tank Minimun Level | High | |
| I1.0 | DI_LSM302L | SONDA LIVELLO MINIMO SERB.STOCCAGGIO - STORE TANK MIN LEVEL PROBE | LSM302_L - Product Tank Minimun Level | High | |
| I1.5 | DI_RMM301_Closed | VALVOLA CHIUSA VM1 - CLOSED VALVE VM1 | RMM301 - Feedback OFF (VM1WATER) | High | |
| I1.6 | DI_RMP302_Closed | CLOSED VALVE VM2 - VALVOLA CHIUSA VM2 | RMP302 - Feedback OFF (VM2 SYRUP) | High | |
| I1.7 | DI_RMM303_Closed | VALVOLA CHIUSA VM3 - CLOSED VALVE VM3 | RMM303 - Feedback OFF (VM3 CO2) | High | |
| I2.0 | DI_PPN301_Ovrld | WATER PUMP OVERLOAD - TERMICO POMPA ACQUA | PPN301 - Deaireator Pump Overload | High | |
| I2.1 | PPN301_SoftStart_Averia | AVARIA POMPA ACQUA - WATER PUMP FAULT | PPN301_SoftStart_Averia | High | |
| I2.2 | DI_PPP302_Ovrld | SYRUP PUMP OVERLOAD - TERMICO POMPA SCIROPPC | PPP302 - Syrup Pump Overload | High | |
| I2.3 | DI_PPP302_Contactor | AVARIA POMPA SCIROPPO - SYRUP PUMP FAULT | PPP302 - Syrup Pump Feedback | Medium | DI_SyrRoom_SyrPump_Running |
| I2.4 | DI_PPM303_Ovrld | OVERPRESS.PUMP OVERLOAD - TERMICO POMPA SOVRAPRES. | PPM303 - Product Pump Overload | High | |
| I2.5 | DI_PPM306_Ovrld | OVERPRESS.PUMP FAULT - AVARIA POMPA SOVRAPRES. | PPM306 - Recirculating Pump Overload | Medium | DI_PPM303_Ovrld |
| I3.5 | DI_UPSAlarm | ALLARME UPS - UPS ALARM | UPS Alarm | High | |
| I3.6 | DI_UPSsupply | ALIMENTAZIONE DA BATTERIE - BATTERY POWER SUPPLY | UPS supply OK | High | |
| I3.7 | DI_UPSBatteryReady | BATTERIA TAMPONE PRONTA - BUFFER BATTERY READY | UPS Battery ready | High | |
| I4.3 | DI_CIP_CIP_Enable | ALARM ENABLING - ABILITAZIONE ALLARME | From CIP Enable | Low | DI_Flr_CIP/RinseFiller, DI_CIP_TankFilling |
| I4.4 | DI_MaxTempAlarm | ABILITAZIONE ALLARME - ALARM ENABLING | Electrical Cabinet High Temperature | Low | DI_CIP_Fault, DI_Flr_CIP_CleaningAlarm |
| I5.0 | DI_SyrRoom_SyrPump_Running | POMPA SALA SCIROPPI INMARCIA - SYRUPS ROOM PUMP RUN | From Syrup Room - Syrup Pump Running | High | |
| I7.1 | DI_Air_InletPress_OK | AIR PRESSURE GAUGE - PRESSOSTATO ARIA | Air Pressure Switch | High | |
| I7.2 | DI_HVP301_Sensor | SENSORE VALVOLA SCARICO SCIROPPO - SYRUP DISCHARGE VALVE SENSOR | GCP301 - Manual Syrup Valve Closed (NO) | High | |
| I7.3 | DI_FSS301 | FLOW GAUGE FLUSSOSTATO | FSS301 - Local Cip Return Flow Switch | High | |
| Q0.0 | DO_HMIPowerSupply | RIPRISTINO ALIMENTAZIONE HMI - HMI POWER SUPPLY RESTORE | Cut Power to PC | High | |
| Q1.0 | DO_SyrRoom_SyrupRequest | RICHIESTA SCIROPPO - SYRUP REQUEST | SYRUP ROOM - Syrup Request | High | |
| Q1.1 | DO_SyRm_WaterRequest | WATER REQUEST - RICHIESTA ACQUA | To syrup Room Water Request | High | |
| Q7.0 | DO_Horn | ALLARME ACUSTICO - ACOUSTIC ALARM | DO_Horn | High | |
| Q7.1 | DO_PB_Green_Lamp | MACHINE START - MARCIA MACCHINA | PB Machine Start Lamp | High | |
| Q7.2 | DO_Red_Lamp | MACHINE ALARM - ALLARME MACCHINA | DO_Red_Lamp | High | |
| Q7.3 | DO_Yellow_Lamp | ROTAT. LAMP - ROT ALLARM | DO_Yellow_Lamp | High | |
| Q7.4 | DO_PPN301_Run | COMANDO POMPA ACQUA - WATER PUMP CONTROL | DO_PPN301_SoftStartPower | High | |
| Q7.5 | DO_PPP302_Run | SYRUP PUMP CONTROL - COMANDO POMPA SCIROPPO | DO_PPP302_Run | High | |
| Q7.6 | DO_PPM303_Run | COMANDO POMPA SOVRAPRESSIONE - OVERPRESSURE PUMP CONTROL | DO_PPM303_Run | High | |
| A16.0 | DO_AVN348 | SFIATO SATURATORE | MIX - Deaireator Inlet | Medium | DO_AVN390, DO_AVM346 |
| A16.1 | DO_AVN350 | DEAREAZIONE ACQUA | nan | Low | DO_AVN325, DO_AVN349, DO_AVM339 |
| A16.2 | DO_AVM382 | RICIRCOLO PRODOTTO | Mix - Product Recirculation though chiller | High | |
| A16.3 | DO_AVN373 | SCARICO DEAREATORE | MIX - Deaireator 2 Drain | High | |
| A16.4 | DO_AVN374 | SCARICO SATURATORE | MIX - Deaireators Connection Drain | High | |
| A16.5 | DO_EV67_SyrupLineDrain | SCARICO SCIROPPO | MIX - N10_O101_ | Medium | DO_EV71_FillerPrPipeDrai, DO_EV19_2 |
| A16.6 | DO_AVN329 | DIVOSFERA SATURATORE | MIX - Deaireator 2 Tank Spray Ball | High | |
| A16.7 | DO_RVN301_Level | TROPPOPIENO DEAREATORE | MIX - Deaireator Level Control | High | |
| A17.0 | DO_AVP317_1 | TROPPO PIENO SATURATORE | MIX - CIP To Syrup | Low | DO_AVM353, DO_AVM369 |
| A17.1 | DO_AVS336 | SFIATO ARIA POMPA PRODOTTO | MIX - CIP Recirculation | Low | DO_AVM342, DO_AVM380 |
| A17.2 | DO_AVP363 | SCARICO SERPENTINA | MIX - Syrup Line In H2O | Low | DO_AVS338, DO_AVM312_Deair_Reflux |
| A17.3 | DO_EV03_SyrupLvlCtrl | RICICLO SCIROPPO START-UP | MIX - N10_O06_ | Low | DO_AVM380, DO_EV04_SyrupFillUp |
| A17.5 | DO_AVM341 | INTERCETTAZIONE INIETT.CO2 | MIX - CO2 Inlet | High | |
| A17.6 | DO_AVN377 | DIVOSFERADEAREATORE | nan | Low | DO_AVM327, DO_AVM329 |
| A17.7 | DO_AVS331 | DEAREAZ. ACOUA IN RISCACOUO | MIX - CIP Venturi | Low | DO_AVS332, DO_EV66_FillerRinseWater |
| A18.0 | DO_AVS331 | ASPIRAZIONE VENTURI | MIX - CIP Venturi | High | |
| A18.1 | DO_AVS332 | LAVAGGIOVENTURI | MIX - CIP Wash Venturi | High | |
| A18.2 | DO_AVS333 | INGRESSO SANIFICANTE 1 | MIX - CIP Caustic | High | |
| A18.3 | DO_AVS334 | INGRESSO SANIFICANTE 2 | MIX - CIP Acid | High | |
| A18.4 | DO_AVS335 | INGRESSO SANIFICANTE 3 | MIX - CIP Peracetic Acid | High | |
| A18.5 | DO_AVS336 | RICIRCOLO SANIFICANTE | MIX - CIP Recirculation | High | |
| A18.6 | DO_AVS337 | SCARICO SANIFICANTE 1 | MIX - CIP Drain | High | |
| A18.7 | DO_AVS338 | SCARICO SCAMBIATORE | MIX - CIP Heater | High | |
| A19.1 | DO_EV71_FiRinseSprayBall | DIVOSFERA SCIROPPO | MIX - N10_O105_ | Low | DO_EV68_FillerRinseWater, DO_AVN329 |
| A19.2 | DO_EV71_FillerPrPipeDrai | SCARICO TUBO SCIROPPO | MIX - N10_O104_ | High | |
| A19.3 | DO_EV72_FlrRinseTankDrai | SCARICO SATURATORE | MIX - N10_O106_ | High | |
| A20.0 | DO_AVN347 | GALLEGGIANTE DEAREATORE 1 | MIX - Deaireator Tank Start CO2 Injection 1 | Low | DO_AVM339, DO_AVM340 |
| A20.1 | DO_AVM340 | INVASAMENTODEAREATORE1 | MIX - Still Water By-Pass Product Intercept | Low | DO_AVM339, DO_AVN347 |
| A20.2 | DO_EV03_SyrupLvlCtrl | GALLEGGIANTE SCIROPPO | MIX - N10_O06_ | Low | DO_EV04_SyrupFillUp, DO_RVP303 |
| A20.3 | DO_EV04_SyrupFillUp | INVASAMENTO SCIROPPO | MIX - N10_O07_ | High | |
| PEW100 | P_AI_LTM302 | LIVELLO SERBATOIO DI STOCCAGGIO - STORAGE TANK LEVEL | LTM302 - Product Tank Level | High | |
| PEW102 | P_AI_PTM304 | SENSORE PRESSIONE SERB.DI STOCCAGGIO - STORAGE TANK PRESSURE SENSOR | PTM304 - Product Tank Pressure | High | |
| PEW104 | P_AI_PTF203 | CONTR.PORTATA CO2 PER DEAREAZIONE - AIR VACUUM CO2 FLOW CONTROL | PTF203 - Differential Pressure | High | |
| PEW106 | P_AI_PTM308 | CONTROLLO PRESSIONE SERBATOIO CO2 - CO2 TANK PRESSURE CONTROL | PTM308 - PCM306 Infeed Pressure | High | |
| PEW108 | P_AI_TTM306 | PRODUCT TEMPERATURE SENSOR - SENSORE TEMPERATURA PRODOTTO | TTM306 - Chiller Temperature | High | |
| PEW112 | P_AI_TTN321 | TEMP. H2O DEAREATORE - H2O DEAREATOR TEMP. | TTN321 - Deaireator Temperature | High | |
| PEW114 | P_AI_RVN304 | NORGREN PV VLAVE ANALOG - OUTPUT USCITA ANALOGICA VALVOLA NORGREN PV | RVN304 - Deaireation Valve | High | |
| PAW100 | P_AO_RMM301 | VALVOLA MOTORIZZATA ACQUA - WATER MOTOR VALVE | RMM301 - Water Flow Control | High | |
| PAW102 | P_AO_RMP302 | VALVOLA MOTORIZZATA SCIROPPO - SYRUP MOTOR VALVE | RMP302 - Syrup Flow Control | High | |
| PAW104 | P_AO_RMM303 | VALVOLA MOTORIZZATA CO2 - CO2 MOTOR VALVE | RMM303 - Gas Flow Control | High | |
| PAW108 | P_AO_PCM306 | AIR VACUUM CO2 FLOW CONTROL - CONTR.PORTATA CO2 PER DEAREAZIONE | PCM306 - Gas Injection Pressure Control | High | |
| PAW110 | P_AO_RVM319 | PRODUCT TEMPERATURE REGULATION - REGOLAZIONE TEMPERATURA PRODOTTO | RVM319 - Chiller Temperature control | High | |
| PAW112 | P_AO_RVS318 | SANIT. TEMP. CONTROL C- ONTROLLO TEMPERATURA SANIFICANTE | RVS318 - Local Cip Heating Valve | High | |
| PAW114 | P_AO_RVN304 | USCITA ANALOGICA VALVOLA NORGREN - SP NORGREN SP VALVE ANALOG OUTPUT | RVN304 - Deaireation Valve | High | |
| PAW122 | P_AO_RVM301 | CONTROLLO PRESSIONE SERBATOIO CO2 - CO2 TANK PRESSURE CONTROL | RVM301 - Product Tank Pressure Valve | High | |
| EW 3080..3084 | P_FTN301_Flow | Volume Flow (PROFIBUS) | MIX - Profibus Variables | High | |
| EW 3100..3104 | P_FTN301_Totalizer | Totalizer Value / Control (PROFIBUS) | MIX - Profibus Variables | High | |
| AW 3100..3100 | P_FTN301_Tot_Ctrl | Totalizer Value / Control (PROFIBUS) | MIX - | High | |
| EW 2030..2034 | P_FTP302_Flow | Mass Flow (PROFIBUS) | MIX - Profibus Variables | High | |
| EW 2045..2049 | P_FTP302_Density | Density (PROFIBUS) | MIX - Profibus Variables | High | |
| EW 2050..2054 | P_FTP302_Brix | Concentration (PROFIBUS) | MIX - Profibus Variables | High | |
| EW 2055..2059 | P_FTP302_Temp | Temperature (PROFIBUS) | MIX - Profibus Variables | High | |
| EW 2070..2074 | P_FTP302_Totalizer | Totalizer Value / Control (PROFIBUS) | MIX - Profibus Variables | High | |
| AW 2070..2070 | P_FTP302_Tot_Ctrl | Totalizer Value / Control (PROFIBUS) | MIX - | High | |
| EW 3200..3204 | P_FTM303_Flow | Mass Flow (PROFIBUS) | MIX - Profibus Variables | High | |
| EW 3215..3219 | P_FTM303_Density | Density (PROFIBUS) | MIX - Profibus Variables | High | |
| EW 3225..3229 | P_FTM303_Temperature | Temperature (PROFIBUS) | MIX - Profibus Variables | High | |
| EW 3240..3244 | P_FTM303_Totalizer | Totalizer Value / Control (PROFIBUS) | MIX - Profibus Variables | High | |
| AW 3240..3240 | P_FTM303_Tot_Ctrl | Totalizer Value / Control (PROFIBUS) | MIX - | High | |
| EW 15000..15031 | P_PDS_CO2 | IN128 mPDS5>PLC_4_1 (PROFIBUS) | nan | Medium | P_gMaselli_ProductCO2, P_AI_ProductCO2 |
| EW 15032..15063 | P_PDS_Product_Brix | IN128 mPDS5>PLC_4_2 (PROFIBUS) | nan | Medium | P_gMaselli_ProductBrix, P_FTP302_Brix |
| EW 15064..15095 | P_PDS_Temperature | IN128 mPDS5>PLC_4_3 (PROFIBUS) | nan | Medium | P_gMaselli_ProductTemp, P_FTM303_Temperature |
| EW 15096..15127 | P_PDS_Density | IN128 mPDS5>PLC_4_4 (PROFIBUS) | nan | Medium | P_gMaselli_ProductNumber, P_FTM303_Density |
| AW 15000..15031 | P_PDS_Recipe_Number | OUT128 PLC>mPDS5_4_1 (PROFIBUS) | PDS Recipe Number | Medium | P_gMaselli_RecipeSetNum |
| AW 15032..15063 | P_PDS_Freeze_To_PDS | OUT128 PLC>mPDS5_4_2 (PROFIBUS) | nan | Medium | MaselliHold, P_gMaselli_RecipeSetNumStr |
| AW 15064..15095 | P_PDS_Stop_to_PDS | OUT128 PLC>mPDS5_4_3 (PROFIBUS) | nan | Medium | MaselliSpare |
| AW 15096..15127 | DO_FillerNextRecipe | OUT128 PLC>mPDS5_4_4 (PROFIBUS) | MIX - | Low | P_gMaselli_RecipeSetNum |
## Exceptions and Issues
| IO | Detected Problem |
| --- | --- |
| I4.3 | Ambiguous description "ALARM ENABLING" could correspond to several inputs |
| I4.4 | Ambiguous description "ABILITAZIONE ALLARME" could correspond to several inputs |
| A16.1 | No clear match found for "DEAREAZIONE ACQUA" |
| A17.0 | No clear match for "TROPPO PIENO SATURATORE" |
| A17.1 | No clear match for "SFIATO ARIA POMPA PRODOTTO" |
| A17.6 | No clear match for "DIVOSFERADEAREATORE" |
| A19.1 | Description "DIVOSFERA SCIROPPO" has no exact match |
| A20.0 | "GALLEGGIANTE DEAREATORE 1" has no exact match |
| A20.1 | "INVASAMENTODEAREATORE1" has no clear match |
| A20.2 | "GALLEGGIANTE SCIROPPO" has no exact match |
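The IO column above uses the Siemens English mnemonics (I/Q/PEW/PAW), while the exported TIA Portal tag tables use the IEC/German style (%E/%A/%EW/%AW). The following is a minimal sketch of that conversion, mirroring the rules applied by the companion update script; the function name `to_logical_address` is illustrative, not part of the scripts:

```python
import re

# Hardware-style address -> TIA Portal logical address (%E/%A/%EW/%AW).
# Unknown formats (e.g. PROFIBUS ranges like "EW 3080..3084") pass through.
_RULES = [
    (re.compile(r"^[IE](\d+)\.(\d+)$"), r"%E\1.\2"),  # digital inputs: I0.0 -> %E0.0
    (re.compile(r"^[QA](\d+)\.(\d+)$"), r"%A\1.\2"),  # digital outputs: Q7.4 / A16.2 -> %A...
    (re.compile(r"^PEW(\d+)$"), r"%EW\1"),            # analog inputs: PEW100 -> %EW100
    (re.compile(r"^PAW(\d+)$"), r"%AW\1"),            # analog outputs: PAW122 -> %AW122
]

def to_logical_address(address: str) -> str:
    address = address.strip()
    for pattern, replacement in _RULES:
        if pattern.match(address):
            return pattern.sub(replacement, address)
    return address  # leave anything unrecognized untouched
```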


@@ -0,0 +1,5 @@
Name;Path;Data Type;Logical Address;Comment;Hmi Visible;Hmi Accessible;Hmi Writeable;Typeobject ID;Version ID
DI_Emergency_Pilz_On;Inputs;Bool;%E0.5;Pilz Emergency;True;True;True;;
DI_LSN301L;Inputs;Bool;%E0.6;LSN301_L - Deaireator Tank Minimun Level;True;True;True;;
DI_LSM302L;Inputs;Bool;%E1.0;LSM302_L - Product Tank Minimun Level;True;True;True;;
DI_PPN301_SoftStart_Ovrld;Inputs;Bool;%E10.0;PPN301 - Water_Pump_SoftStart_Ovrld;True;True;True;;


@@ -0,0 +1,2 @@
Path;BelongsToUnit;Accessibility
Inputs;;


@@ -0,0 +1,6 @@
{
    "name": "TIA Portal object exporter and CAx processor",
    "description": "This set of scripts exports the objects from TIA Portal in XML format along with the CAx objects. Documentation of the exported PLC's IO periphery can then be generated from these CAx data.",
    "version": "1.0",
    "author": "Miguel"
}


@@ -0,0 +1,77 @@
### How to work with config setup Example
script_root = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
)
sys.path.append(script_root)
from backend.script_utils import load_configuration

if __name__ == "__main__":
    """
    Load configuration from script_config.json in the current script directory.

    Returns:
        Dict containing configurations with levels 1, 2, 3 and working_directory

    Example usage in scripts:
        from script_utils import load_configuration

        configs = load_configuration()
        level1_config = configs.get("level1", {})
        level2_config = configs.get("level2", {})
        level3_config = configs.get("level3", {})
        working_dir = configs.get("working_directory", "")
    """
    configs = load_configuration()
    working_directory = configs.get("working_directory")

    # Access the group-specific configuration
    group_config = configs.get("level2", {})

    # Read parameters with default values (using the schema defaults as a guide)
    # Parameters needed for x4
    cfg_scl_output_dirname = group_config.get("scl_output_dir", "scl_output")
    cfg_xref_output_dirname = group_config.get("xref_output_dir", "xref_output")
    cfg_xref_source_subdir = group_config.get("xref_source_subdir", "source")
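For reference, a hypothetical `script_config.json` that would satisfy the reads above. The key names (`working_directory`, `level2` and its `scl_output_dir`, `xref_output_dir`, `xref_source_subdir` entries) are taken from the example; the values shown are made up:

```json
{
    "working_directory": "C:/Trabajo/Exports",
    "level1": {},
    "level2": {
        "scl_output_dir": "scl_output",
        "xref_output_dir": "xref_output",
        "xref_source_subdir": "source"
    },
    "level3": {}
}
```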
### Directory structure for Tia Portal scripts
<working_directory>/
├── <Project_Name>_CAx_Export.aml
├── <PLC1_Name>/
│   ├── ProgramBlocks_XML/
│   │   └── ... (block XML files)
│   ├── ProgramBlocks_SCL/
│   │   └── ... (block SCL files)
│   ├── ProgramBlocks_CR/
│   │   └── ... (block cross-reference XML files)
│   ├── PlcTags/
│   │   └── ... (tag-table XML files)
│   ├── PlcTags_CR/
│   │   └── ... (tag-table cross-reference XML files)
│   ├── PlcDataTypes_CR/
│   │   └── ... (UDT cross-reference XML files)
│   ├── SystemBlocks_CR/
│   │   └── ...
│   ├── SoftwareUnits_CR/
│   │   └── ...
│   └── Documentation/
│       ├── Source/
│       │   └── ... (program-block md files)
│       ├── JSON/
│       │   └── ... (temporary JSON files)
│       ├── xref_calls_tree.md
│       ├── xref_db_usage_summary.md
│       ├── xref_plc_tags_summary.md
│       ├── full_project_representation.md
│       └── <Project_Name>_CAx_Export_Hardware_Tree.md
├── <PLC2_Name>/
│   ├── ProgramBlocks_XML/
│   │   └── ...
│   └── ...
└── ...
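A short sketch of how a script can locate the CAx export and the per-PLC folders in this layout. The function name `find_exports` is an assumption for illustration; only the `*_CAx_Export.aml` naming and the per-PLC folder convention come from the tree above:

```python
import glob
import os

def find_exports(working_directory):
    """Return the *_CAx_Export.aml files and the candidate PLC folders
    found directly under the working directory (layout as shown above)."""
    aml_files = glob.glob(os.path.join(working_directory, "*_CAx_Export.aml"))
    plc_dirs = [
        entry for entry in sorted(os.listdir(working_directory))
        if os.path.isdir(os.path.join(working_directory, entry))
    ]
    return aml_files, plc_dirs
```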


@@ -0,0 +1,8 @@
{
    "x1_agregatetags_to_md.py": {
        "display_name": "Convert Excel tags to Markdown",
        "short_description": "This script converts Excel files containing tags into Markdown tables",
        "long_description": "Note that the name of each Excel file determines whether the table assigns the tags to the inputs or to the outputs.",
        "hidden": false
    }
}


@@ -0,0 +1,472 @@
import pandas as pd
import re
import os
import shutil
import openpyxl
import sys
from datetime import datetime
def read_markdown_table(file_path):
    """Read an Obsidian-style Markdown table and convert it to a DataFrame."""
    with open(file_path, 'r', encoding='utf-8') as file:
        content = file.read()

    # Split the content into lines
    lines = content.strip().split('\n')

    # Find the start of the table (the first line beginning with '|')
    table_start = None
    for i, line in enumerate(lines):
        if line.strip().startswith('|'):
            table_start = i
            break

    if table_start is None:
        print("No se encontró ninguna tabla en el archivo")
        return pd.DataFrame()

    # Collect every line belonging to the table
    table_lines = []
    for i in range(table_start, len(lines)):
        line = lines[i].strip()
        if line.startswith('|'):
            table_lines.append(line)
        elif not line:  # An empty line may mark the end of the table
            # If the next line does not start with '|', treat this as the end
            if i + 1 < len(lines) and not lines[i + 1].strip().startswith('|'):
                break
        else:
            break  # Not a '|' line and not empty: end of the table

    if len(table_lines) < 3:  # Need at least a header, a separator and one data row
        print("La tabla no tiene suficientes filas")
        return pd.DataFrame()

    # Process the header
    header_line = table_lines[0]
    separator_line = table_lines[1]

    # Verify that the second line really is a separator
    is_separator = all(
        cell.strip().startswith(':') or cell.strip().startswith('-')
        for cell in separator_line.split('|')[1:-1] if cell.strip()
    )
    if not is_separator:
        print("Advertencia: La segunda línea no parece ser un separador. Se asume que es parte de los datos.")
        separator_idx = None
    else:
        separator_idx = 1

    # Extract the headers
    header_cells = header_line.split('|')
    # Drop the empty elements at the beginning and end
    if not header_cells[0].strip():
        header_cells = header_cells[1:]
    if not header_cells[-1].strip():
        header_cells = header_cells[:-1]
    headers = [h.strip() for h in header_cells]
    print(f"Encabezados detectados: {headers}")

    # Process the data rows
    data_start_idx = 2 if separator_idx == 1 else 1
    data = []
    for line in table_lines[data_start_idx:]:
        # Split the line on the pipe character
        cells = line.split('|')
        # Drop the empty elements at the beginning and end
        if not cells[0].strip():
            cells = cells[1:]
        if not cells[-1].strip():
            cells = cells[:-1]
        # Clean the values
        row_values = [cell.strip() for cell in cells]
        # Make sure the row has the same number of columns as the header
        if len(row_values) != len(headers):
            print(f"Advertencia: Fila con {len(row_values)} valores, pero se esperaban {len(headers)}. Ajustando...")
            # Adjust the row to match the number of columns
            if len(row_values) < len(headers):
                row_values.extend([''] * (len(headers) - len(row_values)))
            else:
                row_values = row_values[:len(headers)]
        data.append(row_values)

    # Convert to a DataFrame
    df = pd.DataFrame(data, columns=headers)
    return df
def create_log_file(log_path):
    """Create a log file with a timestamp."""
    log_dir = os.path.dirname(log_path)
    if log_dir and not os.path.exists(log_dir):
        os.makedirs(log_dir)
    with open(log_path, 'w', encoding='utf-8') as log_file:
        log_file.write(f"Log de actualización de PLCTags - {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
        log_file.write("=" * 80 + "\n\n")
    return log_path


def log_message(log_path, message):
    """Append a message to the log and echo it to stdout."""
    with open(log_path, 'a', encoding='utf-8') as log_file:
        log_file.write(message + "\n")
    print(message)
def transform_io_address(address):
    """
    Transform IO addresses according to the required format:
    - Ixx.x -> %Exx.x
    - Exx.x -> %Exx.x
    - Qxx.x -> %Axx.x
    - Axx.x -> %Axx.x
    - PEWxx -> %EWxx
    - PAWxx -> %AWxx
    """
    if not address or not isinstance(address, str):
        return address
    address = address.strip()

    # Patterns for boolean addresses
    if re.match(r'^I(\d+)\.(\d+)$', address):
        return re.sub(r'^I(\d+)\.(\d+)$', r'%E\1.\2', address)
    elif re.match(r'^E(\d+)\.(\d+)$', address):
        return re.sub(r'^E(\d+)\.(\d+)$', r'%E\1.\2', address)
    elif re.match(r'^Q(\d+)\.(\d+)$', address):
        return re.sub(r'^Q(\d+)\.(\d+)$', r'%A\1.\2', address)
    elif re.match(r'^A(\d+)\.(\d+)$', address):
        return re.sub(r'^A(\d+)\.(\d+)$', r'%A\1.\2', address)
    # Patterns for word addresses
    elif re.match(r'^PEW(\d+)$', address):
        return re.sub(r'^PEW(\d+)$', r'%EW\1', address)
    elif re.match(r'^PAW(\d+)$', address):
        return re.sub(r'^PAW(\d+)$', r'%AW\1', address)

    # If already in the correct format, or unknown, return as-is
    return address
def update_excel_with_adaptation(excel_path, adaptation_path, output_path=None, log_path=None):
    """
    Update the Excel file with the adaptation information.

    The "Logical Address" column is modified according to these rules:
    1. If the tag is found in the adaptation table, its IO address is converted
       to the %E/%A format.
    2. If it is not found but has a %E, %A, %EW or %AW format, a %M memory
       address is assigned instead.

    Args:
        excel_path: Path to the PLC tags Excel file
        adaptation_path: Path to the Markdown adaptation file
        output_path: Path for the updated Excel file (if None, the original is overwritten)
        log_path: Path for the log file
    """
    # If no output path is given, overwrite the original file
    if output_path is None:
        output_path = excel_path

    # If no log path is given, create a default one
    if log_path is None:
        log_dir = os.path.dirname(output_path)
        log_filename = f"update_log_{datetime.now().strftime('%Y%m%d_%H%M%S')}.txt"
        log_path = os.path.join(log_dir, log_filename)

    # Create the log file
    create_log_file(log_path)
    log_message(log_path, f"Archivo Excel de entrada: {excel_path}")
    log_message(log_path, f"Archivo de adaptación: {adaptation_path}")
    log_message(log_path, f"Archivo Excel de salida: {output_path}")
    log_message(log_path, "-" * 80)

    # Read the adaptation file
    adaptation_df = read_markdown_table(adaptation_path)

    # Automatically identify the Master TAG column
    master_tag_col = None
    for col in adaptation_df.columns:
        # Look for columns with "master" and "tag" in the name (case-insensitive)
        if "master" in col.lower() and "tag" in col.lower():
            master_tag_col = col
            break

    # If not found by exact name, try partial matches
    if not master_tag_col:
        for col in adaptation_df.columns:
            if any(keyword in col.lower() for keyword in ["master", "tag", "name"]):
                master_tag_col = col
                break

    # Still not found: inspect the column contents
    if not master_tag_col and not adaptation_df.empty:
        # Look for columns whose values look like tags (DI_xxx, DO_xxx, etc.)
        for col in adaptation_df.columns:
            # Take a few samples
            samples = adaptation_df[col].dropna().astype(str).head(5).tolist()
            # Check whether any sample matches the Master TAG pattern
            tag_pattern = r'^[A-Z]{2,3}_[A-Za-z0-9_]+$'
            if any(re.match(tag_pattern, s) for s in samples):
                master_tag_col = col
                break

    if not master_tag_col:
        error_msg = "Error: No se encontró la columna 'Master Tag' o similar en el archivo de adaptación"
        log_message(log_path, error_msg)
        log_message(log_path, f"Columnas disponibles: {adaptation_df.columns.tolist()}")
        return False

    log_message(log_path, f"Usando columna '{master_tag_col}' para el mapeo de tags")

    # Drop rows that are completely empty or have an empty Master TAG
    adaptation_df = adaptation_df[adaptation_df[master_tag_col].notna() &
                                  (adaptation_df[master_tag_col] != '')]

    # Automatically identify the IO column
    io_col = None
    for col in adaptation_df.columns:
        # Exact match first
        if col.lower() == "io":
            io_col = col
            break
        # Then partial matches
        if any(keyword in col.lower() for keyword in ["io", "i/o", "address", "logical"]):
            io_col = col
            break

    # Still not found: inspect the column contents
    if not io_col and not adaptation_df.empty:
        # Look for columns whose values look like IO addresses (I0.0, Q1.2, etc.)
        for col in adaptation_df.columns:
            # Take a few samples
            samples = adaptation_df[col].dropna().astype(str).head(5).tolist()
            # Patterns for IO addresses
            io_patterns = [
                r'^[IQM][0-9]+\.[0-9]+$',  # e.g. I0.0, Q1.2
                r'^PE[WBD][0-9]+$',        # e.g. PEW100
                r'^PA[WBD][0-9]+$'         # e.g. PAW100
            ]
            # Check whether any sample matches any pattern
            matches = False
            for pattern in io_patterns:
                if any(re.match(pattern, s) for s in samples):
                    matches = True
                    break
            if matches:
                io_col = col
                break

    if not io_col:
        error_msg = "Error: No se encontró la columna 'IO' o similar en el archivo de adaptación"
        log_message(log_path, error_msg)
        log_message(log_path, f"Columnas disponibles: {adaptation_df.columns.tolist()}")
        return False

    log_message(log_path, f"Usando columna '{io_col}' para los valores de IO")

    # Remove the output file if it already exists
    if os.path.exists(output_path):
        try:
            os.remove(output_path)
            log_message(log_path, f"Archivo de salida existente eliminado: {output_path}")
        except Exception as e:
            log_message(log_path, f"Error al eliminar archivo existente: {e}")
            return False

    # Make an exact copy of the original Excel file
    try:
        shutil.copy2(excel_path, output_path)
        log_message(log_path, f"Archivo Excel copiado: {excel_path} -> {output_path}")
    except Exception as e:
        log_message(log_path, f"Error al copiar el archivo Excel: {e}")
        return False

    # Open the copied Excel file with openpyxl to preserve its structure
    try:
        workbook = openpyxl.load_workbook(output_path)
        log_message(log_path, f"Archivo Excel abierto correctamente: {output_path}")
        log_message(log_path, f"Hojas disponibles: {workbook.sheetnames}")
    except Exception as e:
        log_message(log_path, f"Error al abrir el archivo Excel: {e}")
        return False

    # Build an update dictionary from the adaptation file
    update_dict = {}
    unmatched_count = 0
    for idx, row in adaptation_df.iterrows():
        master_tag = row[master_tag_col]
        io_value = row[io_col]
        # Convert to string and strip whitespace
        master_tag_str = str(master_tag).strip() if not pd.isna(master_tag) else ""
        io_value_str = str(io_value).strip() if not pd.isna(io_value) else ""
        if master_tag_str and io_value_str:
            update_dict[master_tag_str] = transform_io_address(io_value_str)
        else:
            unmatched_count += 1

    if unmatched_count > 0:
        log_message(log_path, f"Advertencia: {unmatched_count} filas en el archivo de adaptación tenían valores vacíos de Master TAG o IO")
    log_message(log_path, f"Tags encontrados en el archivo de adaptación: {len(update_dict)}")

    # Initialize the counters for %M addresses
    memory_byte_counter = 3600
    memory_bit_counter = 0

    # Match bookkeeping
    matched_tags = []
    unmatched_adaptation_tags = set(update_dict.keys())
    converted_to_memory = []

    # Process every sheet
    for sheet_name in workbook.sheetnames:
        sheet = workbook[sheet_name]
        log_message(log_path, f"\nProcesando hoja: {sheet_name}")

        # Locate the "Name", "Logical Address" and "Data Type" columns
        name_col_idx = None
        logical_addr_col_idx = None
        data_type_col_idx = None
        for col_idx, cell in enumerate(sheet[1], 1):  # Assumes the first row holds the headers
            cell_value = str(cell.value).lower() if cell.value else ""
            if "name" in cell_value:
                name_col_idx = col_idx
                log_message(log_path, f"Columna 'Name' encontrada en posición {col_idx}")
            if "logical address" in cell_value:
                logical_addr_col_idx = col_idx
                log_message(log_path, f"Columna 'Logical Address' encontrada en posición {col_idx}")
            if "data type" in cell_value:
                data_type_col_idx = col_idx
                log_message(log_path, f"Columna 'Data Type' encontrada en posición {col_idx}")

        if name_col_idx is None or logical_addr_col_idx is None:
            log_message(log_path, f"No se encontraron las columnas necesarias en la hoja {sheet_name}, omitiendo...")
            continue

        # Update the "Logical Address" values
        updates_in_sheet = 0
        memory_address_conversions = 0
        for row_idx, row in enumerate(sheet.iter_rows(min_row=2), 2):  # Starting at row 2
            name_cell = row[name_col_idx - 1]  # Adjust to 0-based index
            logical_addr_cell = row[logical_addr_col_idx - 1]  # Adjust to 0-based index
            data_type_cell = row[data_type_col_idx - 1] if data_type_col_idx else None  # May be None

            tag_name = str(name_cell.value).strip() if name_cell.value else ""
            current_address = str(logical_addr_cell.value).strip() if logical_addr_cell.value else ""
            data_type = str(data_type_cell.value).strip().lower() if data_type_cell and data_type_cell.value else ""

            if not tag_name or not current_address:
                continue

            # Case 1: tag found in the adaptation dictionary
            if tag_name in update_dict:
                old_value = logical_addr_cell.value
                new_value = update_dict[tag_name]
                logical_addr_cell.value = new_value
                updates_in_sheet += 1
                matched_tags.append(tag_name)
                if tag_name in unmatched_adaptation_tags:
                    unmatched_adaptation_tags.remove(tag_name)
                log_message(log_path, f"  Actualizado: {tag_name} | Viejo valor: {old_value} | Nuevo valor: {new_value}")

            # Case 2: tag not in the adaptation table but with a %E, %A, %EW or %AW format
            elif (current_address.startswith('%E') or
                  current_address.startswith('%A') or
                  current_address.startswith('%EW') or
                  current_address.startswith('%AW')):
                old_value = logical_addr_cell.value
                # Decide whether it is a boolean or a word
                is_boolean = ('bool' in data_type) or ('.' in current_address)
                if is_boolean:
                    # Booleans use the %M byte.bit format
                    new_value = f"%M{memory_byte_counter}.{memory_bit_counter}"
                    memory_bit_counter += 1
                    # After bit 7, move on to the next byte
                    if memory_bit_counter > 7:
                        memory_bit_counter = 0
                        memory_byte_counter += 1
                else:
                    # Words use %MW, advancing in increments of 2
                    new_value = f"%MW{memory_byte_counter}"
                    memory_byte_counter += 2
                logical_addr_cell.value = new_value
                memory_address_conversions += 1
                converted_to_memory.append(tag_name)
                log_message(log_path, f"  Convertido a memoria: {tag_name} | Viejo valor: {old_value} | Nuevo valor: {new_value}")

        log_message(log_path, f"Total de actualizaciones en la hoja {sheet_name}: {updates_in_sheet}")
        log_message(log_path, f"Total de conversiones a memoria en la hoja {sheet_name}: {memory_address_conversions}")

    # Save the changes
    try:
        workbook.save(output_path)
        log_message(log_path, f"\nArchivo Excel actualizado guardado: {output_path}")
    except Exception as e:
        log_message(log_path, f"Error al guardar el archivo Excel: {e}")
        return False

    # Print a summary
    unique_matched_tags = set(matched_tags)
    log_message(log_path, "\n" + "=" * 30 + " RESUMEN " + "=" * 30)
    log_message(log_path, f"Total de tags en archivo de adaptación: {len(update_dict)}")
    log_message(log_path, f"Total de tags actualizados (coincidencias): {len(unique_matched_tags)}")
    log_message(log_path, f"Total de tags convertidos a memoria: {len(converted_to_memory)}")

    # List the adaptation-file tags that had no match
    if unmatched_adaptation_tags:
        log_message(log_path, f"\nTags sin coincidencias ({len(unmatched_adaptation_tags)}):")
        for tag in sorted(unmatched_adaptation_tags):
            log_message(log_path, f"  - {tag} -> {update_dict[tag]}")

    return True
if __name__ == "__main__":
    # Default file paths
    adaptation_table = r"C:\Users\migue\OneDrive\Miguel\Obsidean\Trabajo\VM\04-SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\SAE196 - IO Adapted.md"
    tag_from_master_table = r"C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\TAGsIO\PLCTags.xlsx"
    output_table = r"C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\TAGsIO\PLCTags_Updated.xlsx"  # Write to a new file so the original is not overwritten
    log_path = r"update_log.txt"  # Path for the log file

    # Allow paths to be passed as command-line arguments
    if len(sys.argv) > 1:
        tag_from_master_table = sys.argv[1]
    if len(sys.argv) > 2:
        adaptation_table = sys.argv[2]
    if len(sys.argv) > 3:
        output_table = sys.argv[3]
    if len(sys.argv) > 4:
        log_path = sys.argv[4]

    # Run the update
    result = update_excel_with_adaptation(tag_from_master_table, adaptation_table, output_table, log_path)

    if result:
        print("\nProceso completado exitosamente.")
        print(f"Se ha generado un archivo log en: {log_path}")
    else:
        print("\nHubo errores durante el proceso.")
        print(f"Consulte el archivo log para más detalles: {log_path}")


@ -0,0 +1,193 @@
"""
Convert Excel tags to Markdown: this script converts Excel files containing tags into consolidated Markdown tables.
"""
# Standard library imports
import os
import sys
import glob
# Third-party imports
try:
import pandas as pd
except ImportError:
print(
"Error: La librería 'pandas' no está instalada. Por favor, instálala con 'pip install pandas openpyxl'."
)
sys.exit(1)
# Determine script_root and add to sys.path for custom module import
# This structure assumes x1_agregatetags_to_md.py is located at:
# d:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\ObtainIOFromProjectTia\x1_agregatetags_to_md.py
# leading to script_root = d:\Proyectos\Scripts\ParamManagerScripts
try:
current_script_path = os.path.abspath(__file__)
script_root = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.dirname(current_script_path)))
)
if script_root not in sys.path:
sys.path.append(script_root)
from backend.script_utils import load_configuration
except ImportError:
print(
"Error: No se pudo importar 'load_configuration' desde 'backend.script_utils'."
)
sys.exit(1)
except NameError: # __file__ is not defined
print(
"Error: __file__ no está definido. Este script podría no estar ejecutándose en un entorno Python estándar."
)
sys.exit(1)
def convert_excel_to_markdown_tables():
"""
Search for Excel files in the working_directory, convert them to Markdown
tables, and consolidate them into a single .md file.
"""
try:
configs = load_configuration()
working_directory = configs.get("working_directory")
if not working_directory:
print("Error: 'working_directory' no se encontró en la configuración.")
return
if not os.path.isdir(working_directory):
print(
f"Error: El directorio de trabajo '{working_directory}' no existe o no es un directorio."
)
return
except Exception as e:
print(f"Error al cargar la configuración: {e}")
return
working_directory_abs = os.path.abspath(working_directory)
print(f"Usando directorio de trabajo: {working_directory_abs}")
excel_files = glob.glob(os.path.join(working_directory_abs, "*.xlsx"))
if not excel_files:
print(f"No se encontraron archivos Excel (.xlsx) en {working_directory_abs}.")
return
output_md_filename = "IO Tags consolidated.md"
output_md_filepath_abs = os.path.join(working_directory_abs, output_md_filename)
markdown_content = []
# Definición de las columnas y sus anchos para el formato Markdown (basado en el ejemplo)
# Nombres de las columnas en la tabla Markdown
md_header_names = ["Master Tag", "Type", "Data Type", "Description"]
# Anchos de las celdas de contenido (espacio disponible para el texto)
col_widths = {"Master Tag": 32, "Type": 6, "Data Type": 9, "Description": 71}
# Crear el encabezado y separador Markdown una vez, ya que se repite para cada tabla
header_parts = [f" {name:<{col_widths[name]}} " for name in md_header_names]
markdown_table_header = f"|{'|'.join(header_parts)}|"
separator_parts = [f" {'-'*col_widths[name]} " for name in md_header_names]
markdown_table_separator = f"|{'|'.join(separator_parts)}|"
for excel_file_path in excel_files:
excel_filename = os.path.basename(excel_file_path)
print(f"Procesando archivo Excel: {excel_filename}...")
# Determinar si el nombre del archivo sugiere un tipo específico ("Input" o "Output")
filename_suggested_type = None
if "input" in excel_filename.lower():
filename_suggested_type = "Input"
elif (
"output" in excel_filename.lower()
): # elif para dar precedencia a "input" si ambos estuvieran
filename_suggested_type = "Output"
try:
table_title = os.path.splitext(excel_filename)[0]
markdown_content.append(f"## {table_title}\n")
markdown_content.append(markdown_table_header)
markdown_content.append(markdown_table_separator)
df = pd.read_excel(excel_file_path, sheet_name=0)
# Columnas esperadas en el archivo Excel
excel_col_master_tag = "Name"
excel_col_original_type = "Path" # Columna del Excel que originalmente contiene el tipo (ej. "Inputs")
excel_col_data_type = "Data Type"
excel_col_comment = "Comment" # Columna para la descripción
excel_required_cols = [
excel_col_master_tag,
excel_col_original_type, # Requerida como fuente base o fallback para el tipo
excel_col_data_type,
excel_col_comment,
]
missing_cols = [col for col in excel_required_cols if col not in df.columns]
if missing_cols:
msg = f"*Archivo '{excel_filename}' omitido debido a columnas faltantes: {', '.join(missing_cols)}*\n"
print(f"Advertencia: {msg.strip()}")
markdown_content.append(msg)
continue
for _, row in df.iterrows():
master_tag = str(row.get(excel_col_master_tag, ""))
# Determinar el valor final para la columna 'Type' en Markdown
# Por defecto, tomar de la columna 'Path' (excel_col_original_type) del Excel
tag_type_for_md = str(row.get(excel_col_original_type, "N/A"))
# Si el nombre del archivo sugiere un tipo ("Input" o "Output"), este tiene precedencia
if filename_suggested_type:
tag_type_for_md = filename_suggested_type
data_type = str(row.get(excel_col_data_type, ""))
comment_text = str(row.get(excel_col_comment, ""))
description = (
f'"{comment_text}"' # Descripción tomada de la columna "Comment"
)
master_tag_cell = f"{master_tag:<{col_widths['Master Tag']}.{col_widths['Master Tag']}}"
type_cell = (
f"{tag_type_for_md:<{col_widths['Type']}.{col_widths['Type']}}"
)
data_type_cell = (
f"{data_type:<{col_widths['Data Type']}.{col_widths['Data Type']}}"
)
description_cell = f"{description:<{col_widths['Description']}.{col_widths['Description']}}"
md_row = f"| {master_tag_cell} | {type_cell} | {data_type_cell} | {description_cell} |"
markdown_content.append(md_row)
markdown_content.append("\n") # Espacio después de cada tabla
except FileNotFoundError:
msg = f"*Error procesando '{excel_filename}': Archivo no encontrado.*\n"
print(f"Error: {msg.strip()}")
markdown_content.append(msg)
except pd.errors.EmptyDataError:
msg = f"*Archivo '{excel_filename}' omitido por estar vacío.*\n"
print(f"Advertencia: {msg.strip()}")
markdown_content.append(msg)
except Exception as e:
msg = f"*Error procesando '{excel_filename}': {e}*\n"
print(f"Error: {msg.strip()}")
markdown_content.append(msg)
if markdown_content:
try:
with open(output_md_filepath_abs, "w", encoding="utf-8") as f:
f.write("\n".join(markdown_content))
print(
f"¡Éxito! Archivos Excel convertidos a Markdown en: {output_md_filepath_abs}"
)
except IOError as e:
print(
f"Error al escribir el archivo Markdown '{output_md_filepath_abs}': {e}"
)
else:
print("No se generó contenido para el archivo Markdown.")
if __name__ == "__main__":
convert_excel_to_markdown_tables()
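The fixed-width Markdown cells above are produced with format specs of the form `f"{value:<{w}.{w}}"`, which left-pad and truncate in a single step; a minimal sketch (the helper name `fit_cell` is illustrative, not part of the script):

```python
def fit_cell(value: str, width: int) -> str:
    # Left-align, pad with spaces to `width`, and truncate anything longer,
    # exactly like the f"{value:<{w}.{w}}" format specs used above.
    return f"{value:<{width}.{width}}"

print(repr(fit_cell("DI_PB_Machine_Start", 32)))
print(repr(fit_cell("A very long description that will be cut", 10)))
```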


@ -0,0 +1,511 @@
"""
Convert Markdown tables from the adapted IO into an Excel file for import into TIA Portal.
"""
import pandas as pd
import openpyxl
import re
import os
import sys
import tkinter as tk
from tkinter import filedialog, messagebox
from datetime import datetime
# Determine script_root and add to sys.path for custom module import
try:
current_script_path = os.path.abspath(__file__)
script_root = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.dirname(current_script_path)))
)
if script_root not in sys.path:
sys.path.append(script_root)
from backend.script_utils import load_configuration
except ImportError:
print(
"Error: No se pudo importar 'load_configuration' desde 'backend.script_utils'."
)
sys.exit(1)
except NameError: # __file__ is not defined
print(
"Error: __file__ no está definido. Este script podría no estar ejecutándose en un entorno Python estándar."
)
sys.exit(1)
def read_markdown_table(file_path):
"""Leer tabla en formato Markdown y convertirla a DataFrame."""
with open(file_path, 'r', encoding='utf-8') as file:
content = file.read()
# Dividir el contenido en líneas
lines = content.strip().split('\n')
# Encontrar el inicio de la tabla (primera línea que comienza con '|')
table_start = None
for i, line in enumerate(lines):
if line.strip().startswith('|'):
table_start = i
break
if table_start is None:
print("No se encontró ninguna tabla en el archivo")
return pd.DataFrame()
# Encontrar todas las líneas de la tabla
table_lines = []
for i in range(table_start, len(lines)):
line = lines[i].strip()
if line.startswith('|'):
table_lines.append(line)
elif not line: # Línea vacía podría indicar el final de la tabla
if i + 1 < len(lines) and not lines[i + 1].strip().startswith('|'):
break
else:
break # Si no comienza con '|' y no está vacía, es el final de la tabla
if len(table_lines) < 3: # Necesitamos al menos encabezado, separador y una fila de datos
print("La tabla no tiene suficientes filas")
return pd.DataFrame()
# Procesar encabezados
header_line = table_lines[0]
separator_line = table_lines[1]
# Verificar que la segunda línea sea realmente un separador
is_separator = all(cell.strip().startswith(':') or cell.strip().startswith('-')
for cell in separator_line.split('|')[1:-1] if cell.strip())
if not is_separator:
print("Advertencia: La segunda línea no parece ser un separador. Se asume que es parte de los datos.")
separator_idx = None
else:
separator_idx = 1
# Extraer encabezados
header_cells = header_line.split('|')
# Eliminar elementos vacíos al principio y al final
if not header_cells[0].strip():
header_cells = header_cells[1:]
if not header_cells[-1].strip():
header_cells = header_cells[:-1]
headers = [h.strip() for h in header_cells]
print(f"Encabezados detectados: {headers}")
# Procesar filas de datos
data_start_idx = 2 if separator_idx == 1 else 1
data = []
for line in table_lines[data_start_idx:]:
# Dividir la línea por el carácter pipe
cells = line.split('|')
# Eliminar elementos vacíos al principio y al final
if not cells[0].strip():
cells = cells[1:]
if not cells[-1].strip():
cells = cells[:-1]
# Limpiar valores
row_values = [cell.strip() for cell in cells]
# Asegurar que la fila tenga el mismo número de columnas que los encabezados
if len(row_values) != len(headers):
print(f"Advertencia: Fila con {len(row_values)} valores, pero se esperaban {len(headers)}. Ajustando...")
# Intentar ajustar la fila para que coincida con el número de columnas
if len(row_values) < len(headers):
row_values.extend([''] * (len(headers) - len(row_values)))
else:
row_values = row_values[:len(headers)]
data.append(row_values)
# Convertir a DataFrame
df = pd.DataFrame(data, columns=headers)
return df
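The pipe-splitting performed for both the header line and the data rows can be factored into one helper; a minimal sketch (`split_md_row` is a hypothetical name, not a function in this script):

```python
def split_md_row(line: str) -> list[str]:
    # Split a "| a | b |" Markdown row into stripped cell values,
    # dropping the empty fragments produced by the outer pipes
    # while keeping genuinely empty middle cells.
    cells = [c.strip() for c in line.split('|')]
    if cells and cells[0] == '':
        cells = cells[1:]
    if cells and cells[-1] == '':
        cells = cells[:-1]
    return cells

print(split_md_row("| I0.0 | DI_AuxVoltage_On | Alto |"))
```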
def create_log_file(output_dir):
"""Crear un archivo de log con timestamp."""
log_filename = f"update_log_{datetime.now().strftime('%Y%m%d_%H%M%S')}.txt"
log_path = os.path.join(output_dir, log_filename)
with open(log_path, 'w', encoding='utf-8') as log_file:
log_file.write(f"Log de actualización de PLCTags - {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
log_file.write("=" * 80 + "\n\n")
return log_path
def log_message(log_path, message):
"""Añadir mensaje al log."""
with open(log_path, 'a', encoding='utf-8') as log_file:
log_file.write(message + "\n")
print(message)
def transform_io_address(address):
"""
Transform IO addresses according to the required format:
- Ixx.x %Exx.x
- Exx.x %Exx.x
- Qxx.x %Axx.x
- Axx.x %Axx.x
- PEWxx %EWxx
- PAWxx %AWxx
- EW xx..xx %EWxx (ranges for Profibus)
- AW xx..xx %AWxx (ranges for Profibus)
"""
if not address or not isinstance(address, str):
return address
address = address.strip()
# Handle Profibus ranges (extract the first number before the range)
profibus_match = re.match(r'^(EW|AW)\s+(\d+)\.\..*$', address)
if profibus_match:
prefix, number = profibus_match.groups()
if prefix == 'EW':
return f"%EW{number}"
elif prefix == 'AW':
return f"%AW{number}"
# Patterns for boolean addresses
if re.match(r'^I(\d+)\.(\d+)$', address):
return re.sub(r'^I(\d+)\.(\d+)$', r'%E\1.\2', address)
elif re.match(r'^E(\d+)\.(\d+)$', address):
return re.sub(r'^E(\d+)\.(\d+)$', r'%E\1.\2', address)
elif re.match(r'^Q(\d+)\.(\d+)$', address):
return re.sub(r'^Q(\d+)\.(\d+)$', r'%A\1.\2', address)
elif re.match(r'^A(\d+)\.(\d+)$', address):
return re.sub(r'^A(\d+)\.(\d+)$', r'%A\1.\2', address)
# Patterns for word addresses
elif re.match(r'^PEW(\d+)$', address):
return re.sub(r'^PEW(\d+)$', r'%EW\1', address)
elif re.match(r'^PAW(\d+)$', address):
return re.sub(r'^PAW(\d+)$', r'%AW\1', address)
# If already in correct format or unknown format, return as is
return address
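The if/elif address mapping above can equivalently be expressed as an ordered rule table where the first matching pattern wins and unknown formats pass through unchanged; a standalone sketch (`to_tia_address` and `RULES` are illustrative names, not part of the script):

```python
import re

# Ordered (pattern, replacement) rules mirroring the mapping above.
RULES = [
    (r'^(EW)\s+(\d+)\.\..*$', r'%EW\2'),    # Profibus input ranges
    (r'^(AW)\s+(\d+)\.\..*$', r'%AW\2'),    # Profibus output ranges
    (r'^[IE](\d+)\.(\d+)$',   r'%E\1.\2'),  # boolean inputs (I/E notation)
    (r'^[QA](\d+)\.(\d+)$',   r'%A\1.\2'),  # boolean outputs (Q/A notation)
    (r'^PEW(\d+)$',           r'%EW\1'),    # peripheral input words
    (r'^PAW(\d+)$',           r'%AW\1'),    # peripheral output words
]

def to_tia_address(address: str) -> str:
    address = address.strip()
    for pattern, repl in RULES:
        if re.match(pattern, address):
            return re.sub(pattern, repl, address)
    return address  # already converted or unknown: leave untouched
```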
def is_input_tag(tag_name):
"""Determinar si un tag es de entrada basado en su nombre."""
input_prefixes = ['DI_', 'AI_', 'P_AI_', 'P_FT', 'P_CT', 'P_PT', 'P_TT', 'P_g', 'P_PDS_']
for prefix in input_prefixes:
if tag_name.startswith(prefix):
# Excepciones para P_g que pueden ser outputs
if tag_name.startswith('P_g') and ('_VFC_ControlWord' in tag_name or '_Refvalue' in tag_name):
return False
return True
return False
def is_output_tag(tag_name):
"""Determinar si un tag es de salida basado en su nombre."""
output_prefixes = ['DO_', 'AO_', 'P_AO_', 'P_g', 'MaselliHold', 'MaselliSpare']
for prefix in output_prefixes:
if tag_name.startswith(prefix):
return True
# Si comienza con P_g, revisar si tiene '_VFC_ControlWord' o 'Refvalue' que son outputs
if tag_name.startswith('P_g') and ('_VFC_ControlWord' in tag_name or '_VFC_Refvalue' in tag_name):
return True
# Si comienza con P_PDS_, revisar si son outputs específicos
if tag_name.startswith('P_PDS_') and ('_Recipe_Number' in tag_name or '_Freeze_To_PDS' in tag_name or '_Stop_to_PDS' in tag_name):
return True
return False
def is_bit_type(data_type):
"""Determinar si el tipo de dato es un bit (Bool)."""
return data_type.lower() == 'bool'
def update_plc_tags(excel_path, md_path, output_path, log_path):
"""
Actualiza el archivo Excel con la información del archivo Markdown.
Args:
excel_path: Ruta al archivo Excel exportado de TIA Portal
md_path: Ruta al archivo Markdown con la adaptación IO
output_path: Ruta para guardar el Excel actualizado
log_path: Ruta para el archivo de log
"""
log_message(log_path, f"Iniciando proceso de actualización")
log_message(log_path, f"Archivo Excel de entrada: {excel_path}")
log_message(log_path, f"Archivo Markdown de entrada: {md_path}")
log_message(log_path, f"Archivo Excel de salida: {output_path}")
log_message(log_path, "-" * 80)
# Leer el archivo Markdown
md_df = read_markdown_table(md_path)
# Identificar las columnas relevantes en el archivo Markdown
io_col = None
master_tag_col = None
for col in md_df.columns:
col_lower = col.lower()
if col_lower == 'io' or 'address' in col_lower:
io_col = col
elif 'master' in col_lower and 'tag' in col_lower:
master_tag_col = col
if not io_col or not master_tag_col:
log_message(log_path, "ERROR: No se pudieron identificar las columnas necesarias en el archivo Markdown")
return False
log_message(log_path, f"Columna IO: {io_col}")
log_message(log_path, f"Columna Master Tag: {master_tag_col}")
# Crear un diccionario de mapeo IO desde el Markdown
io_mapping = {}
for _, row in md_df.iterrows():
master_tag = str(row[master_tag_col]).strip()
io_value = str(row[io_col]).strip()
if master_tag and io_value and master_tag != 'nan' and io_value != 'nan':
io_mapping[master_tag] = transform_io_address(io_value)
log_message(log_path, f"Tags mapeados en el archivo Markdown: {len(io_mapping)}")
# Cargar el archivo Excel
try:
# Usar openpyxl para mantener la estructura del Excel
workbook = openpyxl.load_workbook(excel_path)
log_message(log_path, f"Archivo Excel cargado: {excel_path}")
log_message(log_path, f"Hojas disponibles: {workbook.sheetnames}")
except Exception as e:
log_message(log_path, f"ERROR: No se pudo cargar el archivo Excel: {e}")
return False
# Inicializar contadores para direcciones de memoria
input_mem_byte = 3600
input_mem_bit = 0
output_mem_byte = 3800
output_mem_bit = 0
# Estadísticas
total_tags = 0
updated_tags = 0
relocated_to_inputs = 0
relocated_to_outputs = 0
relocated_to_not_in_hardware_inputs = 0
relocated_to_not_in_hardware_outputs = 0
assigned_memory_addresses = 0
# Procesamos la hoja principal (asumimos que es la primera)
if len(workbook.sheetnames) > 0:
sheet = workbook[workbook.sheetnames[0]]
# Encontrar las columnas relevantes
name_col = None
path_col = None
data_type_col = None
logical_address_col = None
for col_idx, cell in enumerate(sheet[1], 1):
cell_value = str(cell.value).strip() if cell.value else ""
if cell_value.lower() == "name":
name_col = col_idx
elif cell_value.lower() == "path":
path_col = col_idx
elif cell_value.lower() == "data type":
data_type_col = col_idx
elif cell_value.lower() == "logical address":
logical_address_col = col_idx
if not all([name_col, path_col, data_type_col, logical_address_col]):
log_message(log_path, "ERROR: No se encontraron todas las columnas necesarias en el Excel")
return False
# Convertir a índices base 0 para openpyxl
name_col -= 1
path_col -= 1
data_type_col -= 1
logical_address_col -= 1
# Recorrer todas las filas (excluyendo la primera que es el encabezado)
for row_idx, row in enumerate(sheet.iter_rows(min_row=2), 2):
name_cell = row[name_col]
path_cell = row[path_col]
data_type_cell = row[data_type_col]
logical_address_cell = row[logical_address_col]
tag_name = str(name_cell.value).strip() if name_cell.value else ""
path = str(path_cell.value).strip() if path_cell.value else ""
data_type = str(data_type_cell.value).strip() if data_type_cell.value else ""
# Verificar si el tag debe ser procesado (está en los paths relevantes)
relevant_paths = [
"Inputs",
"Outputs",
"IO Not in Hardware\\InputsMaster",
"IO Not in Hardware\\OutputsMaster"
]
if path in relevant_paths:
total_tags += 1
# Verificar si el tag está en el mapeo de IO
if tag_name in io_mapping:
old_address = logical_address_cell.value
new_address = io_mapping[tag_name]
logical_address_cell.value = new_address
# Actualizar el path según el tipo de señal
if new_address.startswith("%E"):
path_cell.value = "Inputs"
relocated_to_inputs += 1
elif new_address.startswith("%A"):
path_cell.value = "Outputs"
relocated_to_outputs += 1
updated_tags += 1
log_message(log_path, f"Actualizado: {tag_name} | Viejo valor: {old_address} | Nuevo valor: {new_address} | Path: {path_cell.value}")
# Si no está en el mapeo, asignar dirección de memoria según el tipo
else:
is_input = is_input_tag(tag_name)
is_output = is_output_tag(tag_name)
is_bit = is_bit_type(data_type)
# Para entradas
if is_input and not is_output:
path_cell.value = "IO Not in Hardware\\InputsMaster"
relocated_to_not_in_hardware_inputs += 1
if is_bit:
new_address = f"%M{input_mem_byte}.{input_mem_bit}"
input_mem_bit += 1
if input_mem_bit > 7:
input_mem_bit = 0
input_mem_byte += 1
else:
new_address = f"%MW{input_mem_byte}"
input_mem_byte += 2
# Para salidas
elif is_output:
path_cell.value = "IO Not in Hardware\\OutputsMaster"
relocated_to_not_in_hardware_outputs += 1
if is_bit:
new_address = f"%M{output_mem_byte}.{output_mem_bit}"
output_mem_bit += 1
if output_mem_bit > 7:
output_mem_bit = 0
output_mem_byte += 1
else:
new_address = f"%MW{output_mem_byte}"
output_mem_byte += 2
# Si no podemos determinar si es entrada o salida por el nombre
# Lo determinamos por el path actual
else:
if "Input" in path:
path_cell.value = "IO Not in Hardware\\InputsMaster"
relocated_to_not_in_hardware_inputs += 1
if is_bit:
new_address = f"%M{input_mem_byte}.{input_mem_bit}"
input_mem_bit += 1
if input_mem_bit > 7:
input_mem_bit = 0
input_mem_byte += 1
else:
new_address = f"%MW{input_mem_byte}"
input_mem_byte += 2
else:
path_cell.value = "IO Not in Hardware\\OutputsMaster"
relocated_to_not_in_hardware_outputs += 1
if is_bit:
new_address = f"%M{output_mem_byte}.{output_mem_bit}"
output_mem_bit += 1
if output_mem_bit > 7:
output_mem_bit = 0
output_mem_byte += 1
else:
new_address = f"%MW{output_mem_byte}"
output_mem_byte += 2
old_address = logical_address_cell.value
logical_address_cell.value = new_address
assigned_memory_addresses += 1
log_message(log_path, f"Asignación memoria: {tag_name} | Viejo valor: {old_address} | Nuevo valor: {new_address} | Path: {path_cell.value}")
# Guardar el archivo actualizado
try:
workbook.save(output_path)
log_message(log_path, f"Archivo Excel guardado: {output_path}")
except Exception as e:
log_message(log_path, f"ERROR: No se pudo guardar el archivo Excel: {e}")
return False
# Mostrar estadísticas
log_message(log_path, "\n" + "=" * 30 + " RESUMEN " + "=" * 30)
log_message(log_path, f"Total de tags procesados: {total_tags}")
log_message(log_path, f"Tags actualizados desde el Markdown: {updated_tags}")
log_message(log_path, f"Tags relocalizados a Inputs: {relocated_to_inputs}")
log_message(log_path, f"Tags relocalizados a Outputs: {relocated_to_outputs}")
log_message(log_path, f"Tags relocalizados a InputsMaster: {relocated_to_not_in_hardware_inputs}")
log_message(log_path, f"Tags relocalizados a OutputsMaster: {relocated_to_not_in_hardware_outputs}")
log_message(log_path, f"Tags con direcciones de memoria asignadas: {assigned_memory_addresses}")
return True
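The `input_mem_byte` / `input_mem_bit` bookkeeping above walks bit addresses `.0` through `.7` before moving to the next byte (word tags instead advance the byte counter by 2); a minimal sketch of the bit allocator as a generator (`memory_addresses` is a hypothetical helper, not part of the script):

```python
def memory_addresses(start_byte: int):
    # Yield %M bit addresses from `start_byte` onward, rolling the bit
    # index over to the next byte after .7 — the same bookkeeping the
    # input/output byte and bit counters perform above.
    byte, bit = start_byte, 0
    while True:
        yield f"%M{byte}.{bit}"
        bit += 1
        if bit > 7:
            bit = 0
            byte += 1

gen = memory_addresses(3600)
first_nine = [next(gen) for _ in range(9)]
```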
def main():
configs = load_configuration()
working_directory = configs.get("working_directory")
# Crear interfaz para seleccionar archivos
root = tk.Tk()
root.withdraw() # Ocultar ventana principal
# Pedir al usuario que seleccione los archivos
print("Seleccione el archivo Excel exportado de TIA Portal:")
excel_path = filedialog.askopenfilename(
title="Seleccione el archivo Excel exportado de TIA Portal",
filetypes=[("Excel files", "*.xlsx"), ("All files", "*.*")]
)
if not excel_path:
print("No se seleccionó ningún archivo Excel. Saliendo...")
return
print("Seleccione el archivo Markdown con la adaptación IO:")
md_path = filedialog.askopenfilename(
title="Seleccione el archivo Markdown con la adaptación IO",
filetypes=[("Markdown files", "*.md"), ("All files", "*.*")]
)
if not md_path:
print("No se seleccionó ningún archivo Markdown. Saliendo...")
return
# Determinar la ruta de salida (mismo directorio que el Excel, pero con "_Updated" añadido)
excel_dir = os.path.dirname(excel_path)
excel_filename = os.path.basename(excel_path)
excel_name, excel_ext = os.path.splitext(excel_filename)
output_filename = f"{excel_name}_Updated{excel_ext}"
output_path = os.path.join(excel_dir, output_filename)
# Crear archivo de log
log_path = create_log_file(excel_dir)
# Ejecutar el proceso de actualización
success = update_plc_tags(excel_path, md_path, output_path, log_path)
if success:
messagebox.showinfo("Proceso completado",
f"La actualización se ha completado con éxito.\n\n"
f"Archivo de salida: {output_path}\n\n"
f"Archivo de log: {log_path}")
else:
messagebox.showerror("Error",
f"Hubo un error durante el proceso.\n\n"
f"Consulte el archivo de log para más detalles: {log_path}")
if __name__ == "__main__":
main()


@ -0,0 +1,76 @@
Name;Path;Data Type;Logical Address;Comment;Hmi Visible;Hmi Accessible;Hmi Writeable;Typeobject ID;Version ID
DI_Emergency_Pilz_On;Inputs;Bool;%E0.5;Pilz Emergency;True;True;True;;
DI_LSN301L;Inputs;Bool;%E0.6;LSN301_L - Deaireator Tank Minimun Level;True;True;True;;
DI_LSM302L;Inputs;Bool;%E1.0;LSM302_L - Product Tank Minimun Level;True;True;True;;
DI_PPN301_SoftStart_Ovrld;Inputs;Bool;%E10.0;PPN301 - Water_Pump_SoftStart_Ovrld;True;True;True;;
DI_UPSBatteryReady;Inputs;Bool;%E3.7;UPS Battery ready;True;True;True;;
DI_RMM301_Closed;Inputs;Bool;%E1.5;RMM301 - Feedback OFF (VM1WATER);True;True;True;;
DI_RMP302_Closed;Inputs;Bool;%E1.6;RMP302 - Feedback OFF (VM2 SYRUP);True;True;True;;
DI_RMM303_Closed;Inputs;Bool;%E1.7;RMM303 - Feedback OFF (VM3 CO2);True;True;True;;
DI_PPN301_Contactor;Inputs;Bool;%E11.0;PPN301 - Deaireator Pump Feedback;True;True;True;;
DI_PPP302_Ovrld;Inputs;Bool;%E2.2;PPP302 - Syrup Pump Overload;True;True;True;;
DI_PPP302_Contactor;Inputs;Bool;%E2.3;PPP302 - Syrup Pump Feedback;True;True;True;;
DI_PPM303_Ovrld;Inputs;Bool;%E2.4;PPM303 - Product Pump Overload;True;True;True;;
DI_PPM306_Contactor;Inputs;Bool;%E11.3;PPM306 - Recirculating Pump Feedback;True;True;True;;
DI_SyrRoom_SyrPump_Running;Inputs;Bool;%E5.0;From Syrup Room - Syrup Pump Running;True;True;True;;
DI_SyrRoom_WatPumpReady;Inputs;Bool;%E68.1;From Syrup Room - Water Pump Ready;True;True;True;;
DI_CIP_CIP_Rinse;Inputs;Bool;%E60.1;From CIP Running ;True;True;True;;
DI_CIP_Drain;Inputs;Bool;%E60.2;From CIP Drain;True;True;True;;
DI_Air_InletPress_OK;Inputs;Bool;%E7.1;Air Pressure Switch;True;True;True;;
P_AI_LTM302;Inputs;Word;%EW100;LTM302 - Product Tank Level;True;True;True;;
P_AI_PTM304;Inputs;Word;%EW102;PTM304 - Product Tank Pressure;True;True;True;;
P_AI_LTP303;Inputs;Word;%EW808;LTP303 - Syrup Tank Level;True;True;True;;
P_AI_TTN321;Inputs;Word;%EW112;TTN321 - Deaireator Temperature;True;True;True;;
P_AI_PTF203;Inputs;Word;%EW810;PTF203 - Differential Pressure;True;True;True;;
DI_CIP_CIP_Enable;Inputs;Bool;%E60.0;From CIP Enable;True;True;True;;
DI_AVM362_Open;Inputs;Bool;%E102.3;AVM362 - Feedback ON;True;True;True;;
DI_AVM362_Close;Inputs;Bool;%E112.3;AVM362 - Feedback OFF;True;True;True;;
DI_AVM346_Open;Inputs;Bool;%E102.2;AVM346 - Feedback ON;True;True;True;;
DI_AVM346_Close;Inputs;Bool;%E112.2;AVM346 - Feedback OFF;True;True;True;;
DI_UPSAlarm;Inputs;Bool;%E3.5;UPS Alarm;True;True;True;;
DI_UPSsupply;Inputs;Bool;%E3.6;UPS supply OK;True;True;True;;
DI_Emergency_Pressed;Inputs;Bool;%E4.3;Electrical Panel Emergency Button;True;True;True;;
P_AI_PTP338;Inputs;Word;%EW816;PTP338 - Syrup Inlet Pressure;True;True;True;;
P_FTM303_Density;Inputs;Real;%ED3215;MIX - Profibus Variables;True;True;True;;
P_FTM303_Density_State;Inputs;Byte;%EB3219;MIX - Profibus Variables;True;True;True;;
P_FTM303_Flow;Inputs;Real;%ED3200;MIX - Profibus Variables;True;True;True;;
P_FTM303_Flow_State;Inputs;Byte;%EB3204;MIX - Profibus Variables;True;True;True;;
P_FTM303_Temperature;Inputs;Real;%ED3225;MIX - Profibus Variables;True;True;True;;
P_FTM303_Temperature_State;Inputs;Byte;%EB3229;MIX - Profibus Variables;True;True;True;;
P_FTM303_Totalizer;Inputs;Real;%ED3240;MIX - Profibus Variables;True;True;True;;
P_FTM303_Totalizer_State;Inputs;Byte;%EB3244;MIX - Profibus Variables;True;True;True;;
P_FTN301_Flow;Inputs;Real;%ED3080;MIX - Profibus Variables;True;True;True;;
P_FTN301_Flow_State;Inputs;Byte;%EB3084;MIX - Profibus Variables;True;True;True;;
P_FTN301_Totaliz_State;Inputs;Byte;%EB3104;MIX - Profibus Variables;True;True;True;;
P_FTN301_Totalizer;Inputs;Real;%ED3100;MIX - Profibus Variables;True;True;True;;
P_FTP302_Brix;Inputs;Real;%ED2050;MIX - Profibus Variables;True;True;True;;
P_FTP302_Brix_State;Inputs;Byte;%EB2054;MIX - Profibus Variables;True;True;True;;
P_FTP302_Density;Inputs;Real;%ED2045;MIX - Profibus Variables;True;True;True;;
P_FTP302_Density_State;Inputs;Byte;%EB2049;MIX - Profibus Variables;True;True;True;;
P_FTP302_Flow;Inputs;Real;%ED2030;MIX - Profibus Variables;True;True;True;;
P_FTP302_Flow_State;Inputs;Byte;%EB2034;MIX - Profibus Variables;True;True;True;;
P_FTP302_Temp;Inputs;Real;%ED2055;MIX - Profibus Variables;True;True;True;;
P_FTP302_Temp_State;Inputs;Byte;%EB2059;MIX - Profibus Variables;True;True;True;;
P_FTP302_Totaliz_State;Inputs;Byte;%EB2074;MIX - Profibus Variables;True;True;True;;
P_FTP302_Totalizer;Inputs;Real;%ED2070;MIX - Profibus Variables;True;True;True;;
DI_PPM306_Ovrld;Inputs;Bool;%E10.3;PPM306 - Recirculating Pump Overload;True;True;True;;
DI_CIP_CleaningCompleted;Inputs;Bool;%E60.3;CIP - Cip Cleaning Completed;True;True;True;;
P_AI_TTM306;Inputs;Word;%EW108;TTM306 - Chiller Temperature;True;True;True;;
P_AI_RVN304;Inputs;Word;%EW104;RVN304 - Deaireation Valve;True;True;True;;
P_AI_PCM306;Inputs;Word;%EW106;PCM306 - Gas Pressure Injection;True;True;True;;
P_AI_ProductCO2;Inputs;Word;%EW826;Product Analizer - Product CO2;True;True;True;;
P_gPPM303_VFC_StatusWord;Inputs;Word;%EW1640;MIX - Product Pump - Profibus Variables;True;True;True;;
P_PDS_CO2;Inputs;Real;%ED15060;;True;True;True;;
P_PDS_Product_Brix;Inputs;Real;%ED15084;;True;True;True;;
P_PDS_Temperature;Inputs;Real;%ED15104;;True;True;True;;
P_PDS_Density;Inputs;Real;%ED15112;;True;True;True;;
DI_HVP301_Sensor;Inputs;Bool;%E7.2;GCP301 - Manual Syrup Valve Closed (NO);True;True;True;;
DI_PB_HornReset;Inputs;Bool;%E0.1;PB Horn Reset;True;True;True;;
DI_PB_Machine_Start;Inputs;Bool;%E0.4;PB Machine Start;True;True;True;;
DI_PB_Machine_Stop;Inputs;Bool;%E0.3;PB Machine Stop;True;True;True;;
DI_PPN301_Ovrld;Inputs;Bool;%E2.0;PPN301 - Deaireator Pump Overload;True;True;True;;
DI_AuxVoltage_On;Inputs;Bool;%E0.0;Electrical Panel Restored;True;True;True;;
DI_AlarmReset;Inputs;Bool;%E0.2;PB Machine Reset;True;True;True;;
P_AI_RVM301;Inputs;Word;%EW114;RVM301 - Product Tank Pressure Valve;True;True;True;;
DI_Min_Syrup_Level;Inputs;Bool;%E0.7; - Syrup Tank Minimun Level;True;True;True;;
DI_FSS301;Inputs;Bool;%E7.3;FSS301 - Local Cip Return Flow Switch;True;True;True;;
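The export above is semicolon-delimited; a sketch of loading it with pandas, using an inline two-row sample in the same layout (assumes pandas is installed):

```python
import io
import pandas as pd

# A minimal sample in the same semicolon-delimited layout as the export above;
# trailing ';;' leaves the Typeobject ID and Version ID columns empty.
sample = (
    "Name;Path;Data Type;Logical Address;Comment;Hmi Visible;Hmi Accessible;"
    "Hmi Writeable;Typeobject ID;Version ID\n"
    "DI_Emergency_Pilz_On;Inputs;Bool;%E0.5;Pilz Emergency;True;True;True;;\n"
)

df = pd.read_csv(io.StringIO(sample), sep=";")
print(df.loc[0, "Logical Address"])
```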
65 P_PDS_Temperature Inputs Real %ED15104 True True True
66 P_PDS_Density Inputs Real %ED15112 True True True
67 DI_HVP301_Sensor Inputs Bool %E7.2 GCP301 - Manual Syrup Valve Closed (NO) True True True
68 DI_PB_HornReset Inputs Bool %E0.1 PB Horn Reset True True True
69 DI_PB_Machine_Start Inputs Bool %E0.4 PB Machine Start True True True
70 DI_PB_Machine_Stop Inputs Bool %E0.3 PB Machine Stop True True True
71 DI_PPN301_Ovrld Inputs Bool %E2.0 PPN301 - Deaireator Pump Overload True True True
72 DI_AuxVoltage_On Inputs Bool %E0.0 Electrical Panel Restored True True True
73 DI_AlarmReset Inputs Bool %E0.2 PB Machine Reset True True True
74 P_AI_RVM301 Inputs Word %EW114 RVM301 - Product Tank Pressure Valve True True True
75 DI_Min_Syrup_Level Inputs Bool %E0.7 - Syrup Tank Minimun Level True True True
76 DI_FSS301 Inputs Bool %E7.3 FSS301 - Local Cip Return Flow Switch True True True
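The address column above uses standard Siemens S7 input-area notation (`%E0.1` for a bit, `%EB`/`%EW`/`%ED` for byte/word/dword accesses). A small helper like the following — a hypothetical utility, not part of the committed scripts — can decode those addresses when cross-referencing the hardware table against the master tag list:

```python
import re

# Width in bits implied by the Siemens size prefix (assumption: standard
# S7 notation; %E = single bit, %EB = byte, %EW = word, %ED = double word).
WIDTHS = {"": 1, "B": 8, "W": 16, "D": 32}

def parse_input_address(addr: str):
    """Decode a Siemens input address such as '%E0.1' or '%EW100'.

    Returns (width_bits, byte_offset, bit_offset); bit_offset is None
    for byte/word/dword accesses.
    """
    m = re.fullmatch(r"%E([BWD]?)(\d+)(?:\.(\d+))?", addr)
    if not m:
        raise ValueError(f"Not an input address: {addr}")
    size, byte_s, bit_s = m.groups()
    width = WIDTHS[size]
    if width == 1 and bit_s is None:
        raise ValueError(f"Bit address missing bit number: {addr}")
    return width, int(byte_s), int(bit_s) if bit_s is not None else None

print(parse_input_address("%E0.1"))    # → (1, 0, 1)
print(parse_input_address("%EW100"))   # → (16, 100, None)
print(parse_input_address("%ED3215"))  # → (32, 3215, None)
```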

View File

@@ -1,13 +1,13 @@
--- Log de Ejecución: x3.py ---
Grupo: ObtainIOFromProjectTia
Directorio de Trabajo: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport
Inicio: 2025-05-12 13:56:30
Fin: 2025-05-12 13:56:34
Duración: 0:00:03.887946
Inicio: 2025-05-12 14:24:35
Fin: 2025-05-12 14:24:39
Duración: 0:00:04.165462
Estado: SUCCESS (Código de Salida: 0)
--- SALIDA ESTÁNDAR (STDOUT) ---
--- AML (CAx Export) to Hierarchical JSON and Obsidian MD Converter (v30 - Enhanced Module Info in Hardware Tree) ---
--- AML (CAx Export) to Hierarchical JSON and Obsidian MD Converter (v31.1 - Corrected IO Summary Table Initialization) ---
Using Working Directory for Output: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport
Input AML: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\SAE196_c0.2_CAx_Export.aml
Output Directory: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport
@@ -40,8 +40,7 @@ IO upward debug tree written to: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U
Found 1 PLC(s). Generating individual hardware trees...
Generating Hardware Tree for PLC 'PLC' (ID: a48e038f-0bcc-4b48-8373-033da316c62b) at: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\PLC\Documentation\SAE196_c0.2_CAx_Export_Hardware_Tree.md
Markdown summary written to: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\PLC\Documentation\SAE196_c0.2_CAx_Export_Hardware_Tree.md
Markdown summary (including table) written to: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\PLC\Documentation\SAE196_c0.2_CAx_Export_Hardware_Tree.md
Script finished.

View File

@@ -1,7 +1,44 @@
### How to work with the config setup (example)
script_root = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
)
sys.path.append(script_root)
from backend.script_utils import load_configuration
if __name__ == "__main__":
"""
Load configuration from script_config.json in the current script directory.
Returns:
Dict containing configurations with levels 1, 2, 3 and working_directory
Example usage in scripts:
from script_utils import load_configuration
configs = load_configuration()
level1_config = configs.get("level1", {})
level2_config = configs.get("level2", {})
level3_config = configs.get("level3", {})
working_dir = configs.get("working_directory", "")
""""
configs = load_configuration()
working_directory = configs.get("working_directory")
# Access the group-specific configuration
group_config = configs.get("level2", {})
# Read parameters with default values (using the schema defaults as a guide)
# Parameters required by x4
cfg_scl_output_dirname = group_config.get("scl_output_dir", "scl_output")
cfg_xref_output_dirname = group_config.get("xref_output_dir", "xref_output")
cfg_xref_source_subdir = group_config.get("xref_source_subdir", "source")
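For reference, a minimal sketch of what `load_configuration` could look like under the assumptions described in the docstring above (the real implementation lives in `backend/script_utils.py` and may differ):

```python
import json
import os
import sys

def load_configuration() -> dict:
    """Minimal sketch: read script_config.json from the running script's directory.

    Assumed shape: {"level1": {...}, "level2": {...}, "level3": {...},
    "working_directory": "..."}. A missing file yields an empty dict so that
    callers using .get() with defaults keep working.
    """
    script_dir = os.path.dirname(os.path.abspath(sys.argv[0]))
    config_path = os.path.join(script_dir, "script_config.json")
    if not os.path.exists(config_path):
        return {}
    with open(config_path, "r", encoding="utf-8") as f:
        return json.load(f)
```

Callers then read group-level settings defensively, e.g. `configs.get("level2", {}).get("scl_output_dir", "scl_output")`, so a sparse or absent config file never raises.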
### Directory structure
### Directory structure for Tia Portal scripts
<working_directory>/
├── <Project_Name>_CAx_Export.aml

View File

@@ -441,6 +441,9 @@ def generate_markdown_tree(project_data, md_file_path, target_plc_id):
if plc_info:
plc_name_for_title = plc_info.get('name', target_plc_id)
# v31: Initialize list to store all IO data for the summary table for this PLC
all_plc_io_for_table = []
markdown_lines = [f"# Hardware & IO Summary for PLC: {plc_name_for_title}", ""]
if not plc_info:
@@ -529,6 +532,9 @@ def generate_markdown_tree(project_data, md_file_path, target_plc_id):
else:
# --- Display Logic with Sibling IO Aggregation & Aesthetics ---
for node_id, node_addr in other_device_items:
# v31: Initialize list for table data for the current device being processed
current_device_io_for_table = []
node_info = project_data.get("devices", {}).get(node_id)
if not node_info:
markdown_lines.append(
@@ -775,6 +781,35 @@ def generate_markdown_tree(project_data, md_file_path, target_plc_id):
siemens_addr = (
f"FMT_ERROR({start_str},{length_str})"
)
# v31: Collect data for the summary table (Corrected Indentation)
current_device_io_for_table.append({
"Network": net_info.get('name', net_id),
"Network Type": net_info.get('type', 'Unknown'),
"Device Address": node_addr,
"Device Name": display_name, # Main device name
"Sub-Device": addr_info.get('module_name','?'), # Module name
"Sub-Device OrderNo": addr_info.get('module_order_number', 'N/A'),
"Sub-Device Type": addr_info.get('module_type_name', 'N/A'),
"IO Type": io_type,
"IO Address": siemens_addr,
"Number of Bits": length_bits,
"SortKey": ( # Add a sort key for the table
net_info.get('name', net_id),
sort_key((node_id, node_addr)), # Reuse the device sort key
(
int(addr_info.get("module_pos", "9999"))
if str(addr_info.get("module_pos", "9999")).isdigit()
else 9999
),
addr_info.get("module_name", ""),
io_type,
(
int(addr_info.get("start", "0"))
if str(addr_info.get("start", "0")).isdigit()
else float("inf")
),
)
})
markdown_lines.append(
f" - `{siemens_addr}` (Len={length_bits} bits)"
@@ -811,13 +846,53 @@ def generate_markdown_tree(project_data, md_file_path, target_plc_id):
)
for conn in sorted(list(set(io_conns))):
markdown_lines.append(f" - {conn}")
# v31: Add collected IO for this device to the main list
all_plc_io_for_table.extend(current_device_io_for_table)
markdown_lines.append("") # Spacing
# --- *** END Display Logic *** ---
# --- Add IO Summary Table --- # v31: New section
if all_plc_io_for_table:
markdown_lines.append("\n## IO Summary Table")
markdown_lines.append("")
# Define table headers
headers = [
"Network", "Type", "Address", "Device Name", "Sub-Device",
"OrderNo", "Type", "IO Type", "IO Address", "Number of Bits"
]
markdown_lines.append("| " + " | ".join(headers) + " |")
markdown_lines.append("|-" + "-|-".join(["---"] * len(headers)) + "-|")
# Sort the collected data
sorted_table_data = sorted(all_plc_io_for_table, key=lambda x: x["SortKey"])
# Add rows to the table
for row_data in sorted_table_data:
row = [
row_data.get("Network", "N/A"),
row_data.get("Network Type", "N/A"),
row_data.get("Device Address", "N/A"),
row_data.get("Device Name", "N/A"),
row_data.get("Sub-Device", "N/A"),
row_data.get("Sub-Device OrderNo", "N/A"),
row_data.get("Sub-Device Type", "N/A"),
row_data.get("IO Type", "N/A"),
f"`{row_data.get('IO Address', 'N/A')}`", # Format IO Address as code
row_data.get("Number of Bits", "N/A"),
]
# Escape pipe characters within cell content if necessary (though unlikely for these fields)
row = [str(cell).replace('|', '\\|') for cell in row]
markdown_lines.append("| " + " | ".join(row) + " |")
# --- End Add IO Summary Table ---
try:
# Re-open the file in write mode to include the table at the end
with open(md_file_path, "w", encoding="utf-8") as f:
f.write("\n".join(markdown_lines))
print(f"\nMarkdown summary written to: {md_file_path}")
print(f"Markdown summary (including table) written to: {md_file_path}")
except Exception as e:
print(f"ERROR writing Markdown file {md_file_path}: {e}")
traceback.print_exc()
@@ -1039,7 +1114,7 @@ if __name__ == "__main__":
configs = load_configuration()
working_directory = configs.get("working_directory")
script_version = "v30 - Enhanced Module Info in Hardware Tree"
script_version = "v31.1 - Corrected IO Summary Table Initialization" # Updated version
print(
f"--- AML (CAx Export) to Hierarchical JSON and Obsidian MD Converter ({script_version}) ---"
)

View File

@@ -164,10 +164,10 @@ def check_skip_status(
if __name__ == "__main__":
configs = load_configuration()
working_directory = configs.get("working_directory")
group_config = configs.get("level2", {})
xml_parser_config = configs.get("level2", {})
# <-- NEW: Read configuration parameters for x3, x4, x5 -->
xml_parser_config = configs.get("XML Parser to SCL", {})
# xml_parser_config = configs.get("XML Parser to SCL", {})
cfg_scl_output_dirname = xml_parser_config.get("scl_output_dir", "scl_output")
cfg_xref_output_dirname = xml_parser_config.get("xref_output_dir", "xref_output")
cfg_xref_source_subdir = xml_parser_config.get("xref_source_subdir", "source")

View File

@@ -1,36 +1,9 @@
[13:56:30] Iniciando ejecución de x3.py en C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport...
[13:56:30] --- AML (CAx Export) to Hierarchical JSON and Obsidian MD Converter (v30 - Enhanced Module Info in Hardware Tree) ---
[13:56:30] Using Working Directory for Output: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport
[13:56:34] Input AML: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\SAE196_c0.2_CAx_Export.aml
[13:56:34] Output Directory: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport
[13:56:34] Output JSON: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\SAE196_c0.2_CAx_Export.hierarchical.json
[13:56:34] Output IO Debug Tree MD: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\SAE196_c0.2_CAx_Export_IO_Upward_Debug.md
[13:56:34] Processing AML file: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\SAE196_c0.2_CAx_Export.aml
[13:56:34] Pass 1: Found 203 InternalElement(s). Populating device dictionary...
[13:56:34] Pass 2: Identifying PLCs and Networks (Refined v2)...
[13:56:34] Identified Network: PROFIBUS_1 (d645659a-3704-4cd6-b2c8-6165ceeed6ee) Type: Profibus
[13:56:34] Identified Network: ETHERNET_1 (f0b1c852-7dc9-4748-888e-34c60b519a75) Type: Ethernet/Profinet
[13:56:34] Identified PLC: PLC (a48e038f-0bcc-4b48-8373-033da316c62b) - Type: CPU 1516F-3 PN/DP OrderNo: 6ES7 516-3FP03-0AB0
[13:56:34] Pass 3: Processing InternalLinks (Robust Network Mapping & IO)...
[13:56:34] Found 116 InternalLink(s).
[13:56:34] Mapping Device/Node 'E1' (NodeID:439930b8-1bbc-4cb2-a93b-2eed931f4b12, Addr:10.1.33.11) to Network 'ETHERNET_1'
[13:56:34] --> Associating Network 'ETHERNET_1' with PLC 'PLC' (via Node 'E1' Addr: 10.1.33.11)
[13:56:34] Mapping Device/Node 'P1' (NodeID:904bb0f7-df2d-4c1d-ab65-f45480449db1, Addr:1) to Network 'PROFIBUS_1'
[13:56:34] --> Associating Network 'PROFIBUS_1' with PLC 'PLC' (via Node 'P1' Addr: 1)
[13:56:34] Mapping Device/Node 'PB1' (NodeID:2784bae8-9807-475f-89bd-bcf44282f5f4, Addr:12) to Network 'PROFIBUS_1'
[13:56:34] Mapping Device/Node 'PB1' (NodeID:e9c5f60a-1da2-4c9b-979e-7d03a5b58a44, Addr:20) to Network 'PROFIBUS_1'
[13:56:34] Mapping Device/Node 'PB1' (NodeID:dd7201c2-e127-4a9d-b6ae-7a74a4ffe418, Addr:21) to Network 'PROFIBUS_1'
[13:56:34] Mapping Device/Node 'PB1' (NodeID:d8825919-3a6c-4f95-aef0-62c782cfdb51, Addr:22) to Network 'PROFIBUS_1'
[13:56:34] Mapping Device/Node 'PB1' (NodeID:27d0e31d-46dc-4fdd-ab82-cfb91899a27c, Addr:10) to Network 'PROFIBUS_1'
[13:56:34] Mapping Device/Node 'PB1' (NodeID:d91d5905-aa1a-485e-b4eb-8333cc2133c2, Addr:8) to Network 'PROFIBUS_1'
[13:56:34] Mapping Device/Node 'PB1' (NodeID:0c5dfe06-786d-4ab6-b57c-8dfede56c2aa, Addr:40) to Network 'PROFIBUS_1'
[13:56:34] Data extraction and structuring complete.
[13:56:34] Generating JSON output: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\SAE196_c0.2_CAx_Export.hierarchical.json
[13:56:34] JSON data written successfully.
[13:56:34] IO upward debug tree written to: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\SAE196_c0.2_CAx_Export_IO_Upward_Debug.md
[13:56:34] Found 1 PLC(s). Generating individual hardware trees...
[13:56:34] Generating Hardware Tree for PLC 'PLC' (ID: a48e038f-0bcc-4b48-8373-033da316c62b) at: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\PLC\Documentation\SAE196_c0.2_CAx_Export_Hardware_Tree.md
[13:56:34] Markdown summary written to: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\PLC\Documentation\SAE196_c0.2_CAx_Export_Hardware_Tree.md
[13:56:34] Script finished.
[13:56:34] Ejecución de x3.py finalizada (success). Duración: 0:00:03.887946.
[13:56:34] Log completo guardado en: D:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\ObtainIOFromProjectTia\log_x3.txt
[18:14:57] Iniciando ejecución de x5.py en C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport...
[18:14:57] Usando directorio de trabajo: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport
[18:14:57] Procesando archivo Excel: Inputs PLCTags.xlsx...
[18:14:58] Procesando archivo Excel: InputsMaster PLCTags.xlsx...
[18:14:58] Procesando archivo Excel: Outputs PLCTags.xlsx...
[18:14:58] Procesando archivo Excel: OutputsMaster PLCTags.xlsx...
[18:14:58] ¡Éxito! Archivos Excel convertidos a Markdown en: C:\Trabajo\SIDEL\06 - E5.007363 - Modifica O&U - SAE196 (cip integrato)\Reporte\IOExport\consolidated_excel_tables.md
[18:14:58] Ejecución de x5.py finalizada (success). Duración: 0:00:00.948478.
[18:14:58] Log completo guardado en: D:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\ObtainIOFromProjectTia\log_x5.txt