A new section was added to the backend documentation describing the use of shared services, including implementation examples for the `ExcelService`, the language-detection services, and the Large Language Model (LLM) services. In addition, the scripts `x2_io_adaptation_script.py` and `x3_code_snippets_generator.py`, which are no longer needed, were removed, and the logs were updated to reflect these changes.
This commit is contained in:
parent 164667bc2f
commit 7afdbca03a
@@ -0,0 +1,9 @@
# Add directories or file patterns to ignore during indexing (e.g. foo/ or *.csv)
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
.env.development
.env.test
.env.production
@@ -246,4 +246,165 @@ except FileNotFoundError:
    print("The file 'mi_archivo_legacy.txt' was not found for this example.")
except Exception as e:
    print(f"Error while processing the file: {e}")
```
## 8. Using Shared Services

The project provides a set of reusable services in the `services/` directory for common tasks such as manipulating Excel files, detecting languages, and translating text.

To use these services in your script, make sure the project root directory is on `sys.path`, as explained in section 1 of this guide.
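The `sys.path` setup referenced above can be sketched as follows. This mirrors the pattern used by the project's scripts; the assumption that the script sits three directory levels below the project root may need adjusting for your script's location:

```python
import os
import sys

# Walk up from this file to the project root and add it to sys.path
# so that `services.*` and `backend.*` become importable.
# Assumption: this script lives three directories below the project root.
script_root = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
)
sys.path.append(script_root)
```

After this runs, imports such as `from services.excel.excel_service import ExcelService` resolve against the project root.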
### 8.1 Excel Service (`ExcelService`)

The `ExcelService` (`services/excel/excel_service.py`) simplifies reading and writing Excel files, with retry handling (in case the file is open) and formatting options.

**Import and usage example:**

```python
# Make sure the project root is on sys.path
# ... (sys.path setup code)

from services.excel.excel_service import ExcelService

def main():
    excel_service = ExcelService()

    # Read an Excel file
    try:
        df = excel_service.read_excel("mi_archivo_de_entrada.xlsx")
        print("Data loaded successfully.")

        # ... process the DataFrame ...

        # Save the DataFrame with custom formatting
        format_options = {
            'freeze_row': 2,
            'header_color': 'E6E6E6'
        }

        excel_service.save_excel(
            df,
            "mi_archivo_de_salida.xlsx",
            sheet_name="Resultados",
            format_options=format_options
        )
        print("File saved successfully.")

    except Exception as e:
        print(f"An error occurred while handling the Excel file: {e}")

if __name__ == "__main__":
    main()
```
### 8.2 Language Services

The language services (`services/language/`) detect the language of a given text.

**Import and usage example:**

```python
# Make sure the project root is on sys.path
# ... (sys.path setup code)

from services.language.language_factory import LanguageFactory
from services.language.language_utils import LanguageUtils

def main():
    # Create the language-detection service
    allowed_languages = LanguageUtils.get_available_languages()
    detector = LanguageFactory.create_service("langid", allowed_languages=allowed_languages)

    # Detect the language of a text
    text = "Este es un texto de ejemplo en español."
    lang, confidence = detector.detect_language(text)

    print(f"Text: '{text}'")
    print(f"Detected language: {LanguageUtils.get_language_name(lang)} (code: {lang})")
    print(f"Confidence: {confidence:.2f}")

if __name__ == "__main__":
    main()
```
### 8.3 LLM Services (Large Language Models)

The project includes a service factory (`LLMFactory`) for interacting with different Large Language Models (LLMs). This lets scripts leverage generative AI for tasks such as code analysis, generating semantic descriptions, and so on.

#### 8.3.1 API Key Configuration

Most LLM services require an API key to work. The system manages this centrally through environment variables.

1. **Create the `.env` file**: In the **project root directory**, create a file named `.env` (if it does not already exist).

2. **Add the API keys**: Open the `.env` file and add the keys for the services you plan to use. The system loads these variables automatically at startup.

```env
# Example .env file contents
# (You only need to add the keys for the services you are going to use)

OPENAI_API_KEY="sk-..."
GROQ_API_KEY="gsk_..."
CLAUDE_API_KEY="sk-ant-..."
GEMINI_API_KEY="AIzaSy..."
GROK_API_KEY="YOUR_GROK_API_KEY"
```

**Note**: The `ollama` service runs locally and does not require an API key.
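Once loaded, the keys are plain environment variables. As a minimal illustration (this is not the factory's internal code), a script can check what the services will actually receive:

```python
import os

# Read an API key the way the services ultimately see it.
# os.environ.get returns None when the key is missing.
openai_key = os.environ.get("OPENAI_API_KEY")

if openai_key is None:
    print("OPENAI_API_KEY is not set; OpenAI-backed services will be unavailable.")
else:
    print("OPENAI_API_KEY is configured.")
```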
#### 8.3.2 Import and usage example

The following example shows how a script can load its configuration, initialize an LLM service, and use it to generate text. This pattern is similar to the one used in `x3_generate_semantic_descriptions.py`.

```python
# Make sure the project root is on sys.path
# ... (sys.path setup code)

from services.llm.llm_factory import LLMFactory
from backend.script_utils import load_configuration

def main():
    # Load the script configuration, which may specify which LLM to use
    configs = load_configuration()
    llm_configs = configs.get("llm", {})

    # Get the service type and other parameters from the config
    # Default to 'groq' if none is specified
    service_type = llm_configs.get("service", "groq")

    print(f"🤖 Initializing LLM service: {service_type}")

    # Create a service instance through the factory
    # The factory takes care of passing the API keys from the environment variables
    llm_service = LLMFactory.create_service(service_type, **llm_configs)

    if not llm_service:
        print(f"❌ Error: could not create LLM service '{service_type}'. Aborting.")
        return

    # Use the service to generate text
    try:
        prompt = "Explain quantum computing in a single sentence."
        print(f"Sending prompt: '{prompt}'")

        description = llm_service.generate_text(prompt)

        print("\nLLM response:")
        print(description)

    except Exception as e:
        print(f"An error occurred while contacting the LLM service: {e}")

if __name__ == "__main__":
    main()
```

#### 8.3.3 Available Services

The `LLMFactory` supports the following service types (`service_type`):

- `openai`
- `groq`
- `claude`
- `gemini`
- `grok`
- `ollama` (for local execution)
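Since `ollama` needs no key, a convenient pattern is to fall back to it when no hosted key is configured. The helper below is a hypothetical sketch, not part of the project's API; its return value would simply be passed to `LLMFactory.create_service`:

```python
import os

# Hypothetical selection logic: prefer a hosted service when its API key
# is present in the environment, otherwise fall back to local `ollama`.
def pick_llm_service() -> str:
    if os.environ.get("GROQ_API_KEY"):
        return "groq"
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"
    return "ollama"  # runs locally, no API key required

print(f"Selected LLM service: {pick_llm_service()}")
```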
@@ -1,520 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Script to generate IO adaptation documentation
between TwinCAT and TIA Portal - SIDEL project

Author: Auto-generated
Project: E5.007560 - Modifica O&U - SAE235
"""

import re
import os
import sys
import pandas as pd
import json
from pathlib import Path
from typing import Dict, List, Tuple, Optional
import argparse
from collections import defaultdict

# Configure the path to the project root directory
script_root = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
)
sys.path.append(script_root)

# Import the configuration loader
from backend.script_utils import load_configuration


def load_tiaportal_adaptations(working_directory, file_path="IO Adapted.md"):
    """Loads the TIA Portal adaptations from the markdown file"""
    full_file_path = os.path.join(working_directory, file_path)
    print(f"Loading TIA Portal adaptations from: {full_file_path}")

    adaptations = {}

    if not os.path.exists(full_file_path):
        print(f"⚠️ File {full_file_path} not found")
        return adaptations

    with open(full_file_path, "r", encoding="utf-8") as f:
        content = f.read()

    # Improved patterns for the different IO types
    patterns = [
        # Digital: E0.0, A0.0
        r"\|\s*([EA]\d+\.\d+)\s*\|\s*([^|]+?)\s*\|",
        # Analog: PEW100, PAW100
        r"\|\s*(P[EA]W\d+)\s*\|\s*([^|]+?)\s*\|",
        # Profibus: EW 1640, AW 1640
        r"\|\s*([EA]W\s+\d+)\s*\|\s*([^|]+?)\s*\|",
    ]

    for pattern in patterns:
        matches = re.findall(pattern, content, re.MULTILINE)
        for io_addr, master_tag in matches:
            io_addr = io_addr.strip()
            master_tag = master_tag.strip()
            if io_addr and master_tag and not master_tag.startswith("-"):
                adaptations[io_addr] = master_tag
                print(f"  📍 {io_addr} → {master_tag}")

    print(f"✅ Loaded {len(adaptations)} TIA Portal adaptations")
    return adaptations


def scan_twincat_definitions(working_directory, directory="TwinCat"):
    """Scans TwinCAT files for AT % variable definitions"""
    full_directory = os.path.join(working_directory, directory)
    print(f"\n🔍 Scanning TwinCAT definitions in: {full_directory}")

    definitions = {}

    if not os.path.exists(full_directory):
        print(f"⚠️ Directory {full_directory} not found")
        return definitions

    # Patterns for AT % definitions
    definition_patterns = [
        # Only active definitions are matched; commented-out ones are ignored.
        # Valid example:   DO_CIP_DrainCompleted AT %QX2.1 : BOOL ;
        # Ignored example: DO_FillerNextRecipe_1 (* AT %QX2.1 *) : BOOL;
        r"(\w+)\s+AT\s+%([IQ][XWB]\d+(?:\.\d+)?)\s*:\s*(\w+);"
    ]

    for file_path in Path(full_directory).glob("*.scl"):
        print(f"  📄 Processing: {file_path.name}")

        with open(file_path, "r", encoding="utf-8", errors="ignore") as f:
            content = f.read()

        for pattern in definition_patterns:
            matches = re.findall(pattern, content, re.MULTILINE | re.IGNORECASE)
            for var_name, io_addr, data_type in matches:
                var_name = var_name.strip()
                io_addr = io_addr.strip()
                data_type = data_type.strip()

                definitions[var_name] = {
                    "address": io_addr,
                    "type": data_type,
                    "file": file_path.name,
                    "definition_line": content[: content.find(var_name)].count("\n")
                    + 1,
                }
                print(f"    🔗 {var_name} AT %{io_addr} : {data_type}")

    print(f"✅ Found {len(definitions)} TwinCAT definitions")
    return definitions


def scan_twincat_usage(working_directory, directory="TwinCat"):
    """Scans TwinCAT files for variable usage"""
    full_directory = os.path.join(working_directory, directory)
    print(f"\n🔍 Scanning TwinCAT variable usage in: {full_directory}")

    usage_data = defaultdict(list)

    if not os.path.exists(full_directory):
        print(f"⚠️ Directory {full_directory} not found")
        return usage_data

    for file_path in Path(full_directory).glob("*.scl"):
        print(f"  📄 Analyzing usage in: {file_path.name}")

        with open(file_path, "r", encoding="utf-8", errors="ignore") as f:
            lines = f.readlines()

        for line_num, line in enumerate(lines, 1):
            # Look for variables starting with DI_, DO_, AI_, AO_
            var_matches = re.findall(r"\b([DA][IO]_\w+)\b", line)
            for var_name in var_matches:
                usage_data[var_name].append(
                    {
                        "file": file_path.name,
                        "line": line_num,
                        "context": line.strip()[:100]
                        + ("..." if len(line.strip()) > 100 else ""),
                    }
                )

    print(f"✅ Found usage of {len(usage_data)} distinct variables")
    return usage_data


def convert_tia_to_twincat(tia_addr):
    """Converts TIA Portal addresses to TwinCAT format"""
    conversions = []

    # Digital
    if re.match(r"^E\d+\.\d+$", tia_addr):  # E0.0 → IX0.0
        twincat_addr = tia_addr.replace("E", "IX")
        conversions.append(twincat_addr)
    elif re.match(r"^A\d+\.\d+$", tia_addr):  # A0.0 → QX0.0
        twincat_addr = tia_addr.replace("A", "QX")
        conversions.append(twincat_addr)

    # Analog
    elif re.match(r"^PEW\d+$", tia_addr):  # PEW100 → IW100
        twincat_addr = tia_addr.replace("PEW", "IW")
        conversions.append(twincat_addr)
    elif re.match(r"^PAW\d+$", tia_addr):  # PAW100 → QW100
        twincat_addr = tia_addr.replace("PAW", "QW")
        conversions.append(twincat_addr)

    # Profibus
    elif re.match(r"^EW\s+\d+$", tia_addr):  # EW 1234 → IB1234
        addr_num = re.search(r"\d+", tia_addr).group()
        conversions.append(f"IB{addr_num}")
    elif re.match(r"^AW\s+\d+$", tia_addr):  # AW 1234 → QB1234
        addr_num = re.search(r"\d+", tia_addr).group()
        conversions.append(f"QB{addr_num}")

    return conversions


def find_variable_by_address(definitions, target_address):
    """Looks up a variable by exact address"""
    for var_name, info in definitions.items():
        if info["address"] == target_address:
            return var_name, info
    return None, None


def find_variable_by_name_similarity(definitions, usage_data, master_tag):
    """Looks up variables by name similarity"""
    candidates = []

    # Clean the master tag for comparison
    clean_master = re.sub(r"^[DA][IO]_", "", master_tag).lower()

    # Search the definitions
    for var_name, info in definitions.items():
        clean_var = re.sub(r"^[DA][IO]_", "", var_name).lower()
        if clean_master in clean_var or clean_var in clean_master:
            candidates.append((var_name, info, "definition"))

    # Search the usage data
    for var_name in usage_data.keys():
        clean_var = re.sub(r"^[DA][IO]_", "", var_name).lower()
        if clean_master in clean_var or clean_var in clean_master:
            # Try to find this variable's definition
            var_info = definitions.get(var_name)
            if not var_info:
                var_info = {
                    "address": "Unknown",
                    "type": "Unknown",
                    "file": "Not found",
                }
            candidates.append((var_name, var_info, "usage"))

    return candidates


def analyze_adaptations(tia_adaptations, twincat_definitions, twincat_usage):
    """Analyzes the correlations between TIA Portal and TwinCAT"""
    print(f"\n📊 Analyzing correlations...")

    results = []
    matches_found = 0

    for tia_addr, master_tag in tia_adaptations.items():
        result = {
            "tia_address": tia_addr,
            "master_tag": master_tag,
            "twincat_variable": None,
            "twincat_address": None,
            "twincat_type": None,
            "match_type": None,
            "definition_file": None,
            "usage_files": [],
            "usage_count": 0,
            "confidence": "Low",
        }

        # 1. Try a direct address conversion
        twincat_addresses = convert_tia_to_twincat(tia_addr)
        var_found = False

        for twincat_addr in twincat_addresses:
            var_name, var_info = find_variable_by_address(
                twincat_definitions, twincat_addr
            )
            if var_name:
                result.update(
                    {
                        "twincat_variable": var_name,
                        "twincat_address": var_info["address"],
                        "twincat_type": var_info["type"],
                        "match_type": "Address Match",
                        "definition_file": var_info["file"],
                        "confidence": "High",
                    }
                )
                var_found = True
                matches_found += 1
                break

        # 2. If no address match was found, search by name
        if not var_found:
            candidates = find_variable_by_name_similarity(
                twincat_definitions, twincat_usage, master_tag
            )
            if candidates:
                # Take the best candidate
                best_candidate = candidates[0]
                var_name, var_info, source = best_candidate

                result.update(
                    {
                        "twincat_variable": var_name,
                        "twincat_address": var_info.get("address", "Unknown"),
                        "twincat_type": var_info.get("type", "Unknown"),
                        "match_type": f"Name Similarity ({source})",
                        "definition_file": var_info.get("file", "Unknown"),
                        "confidence": "Medium",
                    }
                )
                matches_found += 1

        # 3. Collect usage information
        if result["twincat_variable"]:
            var_name = result["twincat_variable"]
            if var_name in twincat_usage:
                usage_info = twincat_usage[var_name]
                result["usage_files"] = list(set([u["file"] for u in usage_info]))
                result["usage_count"] = len(usage_info)

        results.append(result)

        # Progress log
        status = "✅" if result["twincat_variable"] else "❌"
        print(f"  {status} {tia_addr} → {master_tag}")
        if result["twincat_variable"]:
            print(
                f"    🔗 {result['twincat_variable']} AT %{result['twincat_address']}"
            )
            if result["usage_count"] > 0:
                print(
                    f"    📝 Used in {result['usage_count']} places: {', '.join(result['usage_files'])}"
                )

    print(
        f"\n🎯 Summary: {matches_found}/{len(tia_adaptations)} variables correlated ({matches_found/len(tia_adaptations)*100:.1f}%)"
    )

    return results


def create_results_directory(working_directory):
    """Creates the results directory if it does not exist"""
    results_dir = Path(working_directory) / "resultados"
    results_dir.mkdir(exist_ok=True)
    print(f"📁 Results directory: {results_dir.absolute()}")
    return results_dir


def generate_json_output(
    results, working_directory, output_file="io_adaptation_data.json"
):
    """Generates a JSON file with structured data for later analysis"""
    full_output_file = os.path.join(working_directory, "resultados", output_file)
    print(f"\n📄 Generating JSON file: {full_output_file}")

    json_data = {
        "metadata": {
            "generated_at": pd.Timestamp.now().isoformat(),
            "project": "E5.007560 - Modifica O&U - SAE235",
            "total_adaptations": len(results),
            "matched_variables": len([r for r in results if r["twincat_variable"]]),
            "high_confidence": len([r for r in results if r["confidence"] == "High"]),
            "medium_confidence": len(
                [r for r in results if r["confidence"] == "Medium"]
            ),
        },
        "adaptations": [],
    }

    for result in results:
        adaptation = {
            "tia_portal": {
                "address": result["tia_address"],
                "tag": result["master_tag"],
            },
            "twincat": {
                "variable": result["twincat_variable"],
                "address": result["twincat_address"],
                "data_type": result["twincat_type"],
                "definition_file": result["definition_file"],
            },
            "correlation": {
                "match_type": result["match_type"],
                "confidence": result["confidence"],
                "found": result["twincat_variable"] is not None,
            },
            "usage": {
                "usage_count": result["usage_count"],
                "usage_files": result["usage_files"],
            },
        }
        json_data["adaptations"].append(adaptation)

    with open(full_output_file, "w", encoding="utf-8") as f:
        json.dump(json_data, f, indent=2, ensure_ascii=False)

    print(f"✅ JSON file generated: {full_output_file}")


def generate_detailed_report(
    results, working_directory, output_file="IO_Detailed_Analysis_Report.md"
):
    """Generates a detailed report with a markdown table"""
    full_output_file = os.path.join(working_directory, "resultados", output_file)
    print(f"\n📄 Generating detailed report: {full_output_file}")

    with open(full_output_file, "w", encoding="utf-8") as f:
        f.write("# Detailed IO Adaptation Analysis Report\n\n")
        f.write(
            f"**Generated at:** {pd.Timestamp.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
        )

        # Statistics
        total = len(results)
        matched = len([r for r in results if r["twincat_variable"]])
        high_conf = len([r for r in results if r["confidence"] == "High"])
        medium_conf = len([r for r in results if r["confidence"] == "Medium"])

        f.write("## 📊 General Statistics\n\n")
        f.write(f"- **Total adaptations processed:** {total}\n")
        f.write(f"- **Variables found:** {matched} ({matched/total*100:.1f}%)\n")
        f.write(f"- **High-confidence matches:** {high_conf}\n")
        f.write(f"- **Medium-confidence matches:** {medium_conf}\n\n")

        # Table of successfully correlated variables
        f.write("## ✅ Successfully Correlated Variables\n\n")
        matched_results = [r for r in results if r["twincat_variable"]]

        if matched_results:
            # Table header
            f.write(
                "| TIA Address | TIA Tag | TwinCAT Variable | TwinCAT Address | Type | Method | Confidence | Def. File | Usage | Usage Files |\n"
            )
            f.write(
                "|-------------|---------|------------------|-----------------|------|--------|------------|-----------|-------|--------------|\n"
            )

            # Data rows
            for result in matched_results:
                usage_files_str = ", ".join(
                    result["usage_files"][:3]
                )  # Limit to 3 files
                if len(result["usage_files"]) > 3:
                    usage_files_str += "..."

                f.write(
                    f"| {result['tia_address']} | "
                    f"`{result['master_tag']}` | "
                    f"`{result['twincat_variable']}` | "
                    f"`%{result['twincat_address']}` | "
                    f"`{result['twincat_type']}` | "
                    f"{result['match_type']} | "
                    f"{result['confidence']} | "
                    f"{result['definition_file']} | "
                    f"{result['usage_count']} | "
                    f"{usage_files_str} |\n"
                )

            f.write("\n")

        # Table of variables that were not found
        f.write("## ❌ Variables Not Found\n\n")
        unmatched_results = [r for r in results if not r["twincat_variable"]]

        if unmatched_results:
            f.write("| TIA Address | TIA Tag |\n")
            f.write("|-------------|----------|\n")

            for result in unmatched_results:
                f.write(f"| {result['tia_address']} | `{result['master_tag']}` |\n")

            f.write(f"\n**Total not found:** {len(unmatched_results)}\n\n")

        # Recommendations
        f.write("## 💡 Recommendations\n\n")
        f.write("1. **High-confidence variables** can be migrated directly\n")
        f.write("2. **Medium-confidence variables** require manual verification\n")
        f.write(
            "3. **Variables not found** require manual mapping or may be obsolete\n"
        )
        f.write("4. Heavily used variables are a priority for the migration\n\n")

        # Summary by confidence level
        f.write("## 📈 Confidence Distribution\n\n")
        f.write("| Confidence Level | Count | Percentage |\n")
        f.write("|------------------|-------|------------|\n")
        f.write(f"| High | {high_conf} | {high_conf/total*100:.1f}% |\n")
        f.write(f"| Medium | {medium_conf} | {medium_conf/total*100:.1f}% |\n")
        f.write(
            f"| Not found | {total-matched} | {(total-matched)/total*100:.1f}% |\n"
        )

    print(f"✅ Detailed report generated: {full_output_file}")


def main():
    print("🚀 Starting detailed TwinCAT ↔ TIA Portal IO adaptation analysis")
    print("=" * 80)

    # Load configuration
    configs = load_configuration()

    # Verify that it loaded correctly
    if not configs:
        print(
            "Warning: configuration could not be loaded, using default values"
        )
        working_directory = "./"
    else:
        working_directory = configs.get("working_directory", "./")

    # Verify the working directory
    if not os.path.exists(working_directory):
        print(f"Error: the working directory does not exist: {working_directory}")
        return

    print(f"📁 Working directory: {working_directory}")

    # Create the results directory
    results_dir = create_results_directory(working_directory)

    # Load data
    tia_adaptations = load_tiaportal_adaptations(working_directory)
    twincat_definitions = scan_twincat_definitions(working_directory)
    twincat_usage = scan_twincat_usage(working_directory)

    # Analyze correlations
    results = analyze_adaptations(tia_adaptations, twincat_definitions, twincat_usage)

    # Generate reports in the results directory
    generate_detailed_report(results, working_directory)
    generate_json_output(results, working_directory)

    # Generate a CSV for further analysis
    df = pd.DataFrame(results)
    csv_file = results_dir / "io_detailed_analysis.csv"
    df.to_csv(csv_file, index=False, encoding="utf-8")
    print(f"✅ Data exported to CSV: {csv_file}")

    print(f"\n🎉 Analysis completed successfully!")
    print(f"📁 Files generated in: {results_dir.absolute()}")
    print(f"  📄 {results_dir / 'IO_Detailed_Analysis_Report.md'}")
    print(f"  📄 {results_dir / 'io_adaptation_data.json'}")
    print(f"  📄 {results_dir / 'io_detailed_analysis.csv'}")

    return results


if __name__ == "__main__":
    results = main()
@@ -1,315 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Script to generate usage snippets for IO variables
between TwinCAT and TIA Portal - SIDEL project

Author: Auto-generated
Project: E5.007560 - Modifica O&U - SAE235
"""

import json
import os
import sys
import re
from pathlib import Path
from typing import Dict, List, Tuple, Optional
import pandas as pd

# Configure the path to the project root directory
script_root = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
)
sys.path.append(script_root)

# Import the configuration loader
from backend.script_utils import load_configuration


def load_adaptation_data(working_directory, json_file='io_adaptation_data.json'):
    """Loads the adaptation data from the JSON file"""
    full_json_file = os.path.join(working_directory, 'resultados', json_file)
    print(f"📖 Loading adaptation data from: {full_json_file}")

    if not os.path.exists(full_json_file):
        print(f"⚠️ File {full_json_file} not found")
        return None

    with open(full_json_file, 'r', encoding='utf-8') as f:
        data = json.load(f)

    print(f"✅ Loaded data for {data['metadata']['total_adaptations']} adaptations")
    return data


def find_variable_usage_in_file(file_path, variable_name, max_occurrences=3):
    """Finds usages of a variable in a given file and returns their context"""
    if not os.path.exists(file_path):
        return []

    usages = []

    try:
        with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
            lines = f.readlines()

        # Find all lines that contain the variable
        found_lines = []
        for line_num, line in enumerate(lines):
            # Match the variable as a whole word (not as part of another word)
            if re.search(rf'\b{re.escape(variable_name)}\b', line):
                found_lines.append((line_num, line.strip()))
                if len(found_lines) >= max_occurrences:
                    break

        # For each occurrence, capture context (previous, current, and next line)
        for line_num, line_content in found_lines:
            context = {
                'line_number': line_num + 1,  # Convert to 1-indexed
                'before': lines[line_num - 1].strip() if line_num > 0 else "",
                'current': line_content,
                'after': lines[line_num + 1].strip() if line_num < len(lines) - 1 else ""
            }
            usages.append(context)

    except Exception as e:
        print(f"⚠️ Error reading file {file_path}: {e}")

    return usages


def find_tia_portal_usage(adaptation, working_directory):
    """Finds usages of TIA Portal variables in markdown files"""
    tia_address = adaptation['tia_portal']['address']
    tia_tag = adaptation['tia_portal']['tag']

    # Search TIA Portal files (mainly .md files)
    tia_usages = []

    # Search the TiaPortal/ directory
    tia_portal_dir = Path(working_directory) / 'TiaPortal'
    if tia_portal_dir.exists():
        for md_file in tia_portal_dir.glob('*.md'):
            # Search by TIA address
            address_usages = find_variable_usage_in_file(md_file, tia_address, 2)
            for usage in address_usages:
                usage['file'] = f"TiaPortal/{md_file.name}"
                usage['search_term'] = tia_address
                tia_usages.append(usage)

            # Search by TIA tag if it differs
            if tia_tag != tia_address:
                tag_usages = find_variable_usage_in_file(md_file, tia_tag, 1)
                for usage in tag_usages:
                    usage['file'] = f"TiaPortal/{md_file.name}"
                    usage['search_term'] = tia_tag
                    tia_usages.append(usage)

            # Cap the total number of TIA usages
            if len(tia_usages) >= 3:
                break

    return tia_usages[:3]  # At most 3 TIA usages


def find_twincat_usage(adaptation, working_directory):
    """Finds usages of TwinCAT variables in .scl files"""
    if not adaptation['correlation']['found']:
        return []

    variable_name = adaptation['twincat']['variable']
    usage_files = adaptation['usage']['usage_files']
|
||||
|
||||
twincat_usages = []
|
||||
|
||||
# Buscar en archivos TwinCAT
|
||||
twincat_dir = Path(working_directory) / 'TwinCat'
|
||||
if twincat_dir.exists():
|
||||
for file_name in usage_files:
|
||||
file_path = twincat_dir / file_name
|
||||
if file_path.exists():
|
||||
usages = find_variable_usage_in_file(file_path, variable_name, 2)
|
||||
for usage in usages:
|
||||
usage['file'] = f"TwinCat/{file_name}"
|
||||
usage['search_term'] = variable_name
|
||||
twincat_usages.append(usage)
|
||||
|
||||
# Limitar por archivo
|
||||
if len(twincat_usages) >= 3:
|
||||
break
|
||||
|
||||
return twincat_usages[:3] # Máximo 3 usos TwinCAT
|
||||
|
||||
|
||||
def generate_code_snippets_report(data, working_directory, output_file='IO_Code_Snippets_Report.md'):
|
||||
"""Genera el reporte con snippets de código"""
|
||||
full_output_file = os.path.join(working_directory, 'resultados', output_file)
|
||||
print(f"\n📄 Generando reporte de snippets: {full_output_file}")
|
||||
|
||||
matched_adaptations = [a for a in data['adaptations'] if a['correlation']['found']]
|
||||
|
||||
with open(full_output_file, 'w', encoding='utf-8') as f:
|
||||
f.write("# Reporte de Snippets de Código - Adaptación IO\n\n")
|
||||
f.write(f"**Fecha de generación:** {pd.Timestamp.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
|
||||
f.write(f"**Proyecto:** {data['metadata']['project']}\n\n")
|
||||
|
||||
f.write("## 📋 Resumen\n\n")
|
||||
f.write(f"- **Variables analizadas:** {len(matched_adaptations)}\n")
|
||||
f.write(f"- **Snippets generados:** Se muestran hasta 3 usos por plataforma\n")
|
||||
f.write(f"- **Formato:** Contexto de 3 líneas (anterior, actual, siguiente)\n\n")
|
||||
|
||||
f.write("---\n\n")
|
||||
|
||||
# Procesar cada adaptación
|
||||
for i, adaptation in enumerate(matched_adaptations, 1):
|
||||
tia_address = adaptation['tia_portal']['address']
|
||||
tia_tag = adaptation['tia_portal']['tag']
|
||||
twincat_var = adaptation['twincat']['variable']
|
||||
twincat_addr = adaptation['twincat']['address']
|
||||
|
||||
print(f" 📝 Procesando {i}/{len(matched_adaptations)}: {tia_address} → {twincat_var}")
|
||||
|
||||
f.write(f"## {i}. {tia_address} → {twincat_var}\n\n")
|
||||
f.write(f"**TIA Portal:** `{tia_tag}` (`{tia_address}`)\n")
|
||||
f.write(f"**TwinCAT:** `{twincat_var}` (`%{twincat_addr}`)\n")
|
||||
f.write(f"**Tipo:** `{adaptation['twincat']['data_type']}`\n\n")
|
||||
|
||||
# Buscar usos en TIA Portal
|
||||
f.write("### 🔵 Uso en TIA Portal\n\n")
|
||||
tia_usages = find_tia_portal_usage(adaptation, working_directory)
|
||||
|
||||
if tia_usages:
|
||||
for j, usage in enumerate(tia_usages):
|
||||
f.write(f"**Uso {j+1}:** [{usage['file']}]({usage['file']}) - Línea {usage['line_number']}\n\n")
|
||||
f.write("```scl\n")
|
||||
if usage['before']:
|
||||
f.write(f"{usage['before']}\n")
|
||||
f.write(f">>> {usage['current']} // ← {usage['search_term']}\n")
|
||||
if usage['after']:
|
||||
f.write(f"{usage['after']}\n")
|
||||
f.write("```\n\n")
|
||||
else:
|
||||
f.write("*No se encontraron usos específicos en archivos TIA Portal.*\n\n")
|
||||
|
||||
# Buscar usos en TwinCAT
|
||||
f.write("### 🟢 Uso en TwinCAT\n\n")
|
||||
twincat_usages = find_twincat_usage(adaptation, working_directory)
|
||||
|
||||
if twincat_usages:
|
||||
for j, usage in enumerate(twincat_usages):
|
||||
f.write(f"**Uso {j+1}:** [{usage['file']}]({usage['file']}) - Línea {usage['line_number']}\n\n")
|
||||
f.write("```scl\n")
|
||||
if usage['before']:
|
||||
f.write(f"{usage['before']}\n")
|
||||
f.write(f">>> {usage['current']} // ← {usage['search_term']}\n")
|
||||
if usage['after']:
|
||||
f.write(f"{usage['after']}\n")
|
||||
f.write("```\n\n")
|
||||
else:
|
||||
f.write("*Variable definida pero no se encontraron usos específicos.*\n\n")
|
||||
|
||||
f.write("---\n\n")
|
||||
|
||||
print(f"✅ Reporte de snippets generado: {full_output_file}")
|
||||
|
||||
|
||||
def generate_summary_statistics(data, working_directory, output_file='IO_Usage_Statistics.md'):
|
||||
"""Genera estadísticas de uso de las variables"""
|
||||
full_output_file = os.path.join(working_directory, 'resultados', output_file)
|
||||
print(f"\n📊 Generando estadísticas de uso: {full_output_file}")
|
||||
|
||||
matched_adaptations = [a for a in data['adaptations'] if a['correlation']['found']]
|
||||
|
||||
# Calcular estadísticas
|
||||
total_usage = sum(a['usage']['usage_count'] for a in matched_adaptations)
|
||||
variables_with_usage = len([a for a in matched_adaptations if a['usage']['usage_count'] > 0])
|
||||
|
||||
# Variables más usadas
|
||||
most_used = sorted(matched_adaptations, key=lambda x: x['usage']['usage_count'], reverse=True)[:10]
|
||||
|
||||
# Archivos más referenciados
|
||||
file_usage = {}
|
||||
for adaptation in matched_adaptations:
|
||||
for file_name in adaptation['usage']['usage_files']:
|
||||
file_usage[file_name] = file_usage.get(file_name, 0) + 1
|
||||
|
||||
top_files = sorted(file_usage.items(), key=lambda x: x[1], reverse=True)[:10]
|
||||
|
||||
with open(full_output_file, 'w', encoding='utf-8') as f:
|
||||
f.write("# Estadísticas de Uso de Variables IO\n\n")
|
||||
f.write(f"**Fecha de generación:** {pd.Timestamp.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n")
|
||||
|
||||
f.write("## 📊 Resumen General\n\n")
|
||||
f.write(f"- **Variables correlacionadas:** {len(matched_adaptations)}\n")
|
||||
f.write(f"- **Variables con uso documentado:** {variables_with_usage}\n")
|
||||
f.write(f"- **Total de usos encontrados:** {total_usage}\n")
|
||||
f.write(f"- **Promedio de usos por variable:** {total_usage/len(matched_adaptations):.1f}\n\n")
|
||||
|
||||
f.write("## 🔥 Top 10 Variables Más Usadas\n\n")
|
||||
f.write("| Ranking | TIA Address | TwinCAT Variable | Usos | Archivos |\n")
|
||||
f.write("|---------|-------------|------------------|------|----------|\n")
|
||||
|
||||
for i, adaptation in enumerate(most_used, 1):
|
||||
files_str = ', '.join(adaptation['usage']['usage_files'][:3])
|
||||
if len(adaptation['usage']['usage_files']) > 3:
|
||||
files_str += '...'
|
||||
|
||||
f.write(f"| {i} | {adaptation['tia_portal']['address']} | "
|
||||
f"`{adaptation['twincat']['variable']}` | "
|
||||
f"{adaptation['usage']['usage_count']} | {files_str} |\n")
|
||||
|
||||
f.write("\n## 📁 Top 10 Archivos Más Referenciados\n\n")
|
||||
f.write("| Ranking | Archivo | Variables Usadas |\n")
|
||||
f.write("|---------|---------|------------------|\n")
|
||||
|
||||
for i, (file_name, count) in enumerate(top_files, 1):
|
||||
f.write(f"| {i} | `{file_name}` | {count} |\n")
|
||||
|
||||
print(f"✅ Estadísticas de uso generadas: {full_output_file}")
|
||||
|
||||
|
||||
def main():
|
||||
print("🚀 Iniciando generación de snippets de código para adaptación IO")
|
||||
print("=" * 70)
|
||||
|
||||
# Cargar configuración
|
||||
configs = load_configuration()
|
||||
|
||||
# Verificar que se cargó correctamente
|
||||
if not configs:
|
||||
print("Advertencia: No se pudo cargar la configuración, usando valores por defecto")
|
||||
working_directory = "./"
|
||||
else:
|
||||
working_directory = configs.get("working_directory", "./")
|
||||
|
||||
# Verificar directorio de trabajo
|
||||
if not os.path.exists(working_directory):
|
||||
print(f"Error: El directorio de trabajo no existe: {working_directory}")
|
||||
return
|
||||
|
||||
print(f"📁 Directorio de trabajo: {working_directory}")
|
||||
|
||||
# Crear directorio de resultados si no existe
|
||||
results_dir = Path(working_directory) / 'resultados'
|
||||
results_dir.mkdir(exist_ok=True)
|
||||
|
||||
# Cargar datos de adaptación
|
||||
data = load_adaptation_data(working_directory)
|
||||
if not data:
|
||||
print("❌ No se pudieron cargar los datos de adaptación")
|
||||
return
|
||||
|
||||
# Generar reporte de snippets
|
||||
generate_code_snippets_report(data, working_directory)
|
||||
|
||||
# Generar estadísticas de uso
|
||||
generate_summary_statistics(data, working_directory)
|
||||
|
||||
print(f"\n🎉 Generación completada exitosamente!")
|
||||
print(f"📁 Archivos generados en: {results_dir.absolute()}")
|
||||
print(f" 📄 {results_dir / 'IO_Code_Snippets_Report.md'}")
|
||||
print(f" 📄 {results_dir / 'IO_Usage_Statistics.md'}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
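La técnica central del script eliminado — buscar una variable como palabra completa con `\b` y `re.escape`, y devolver un contexto de tres líneas — puede aislarse en un boceto autocontenido. Los datos de ejemplo (`lineas`) son hipotéticos, solo para ilustrar la búsqueda:

```python
import re

def buscar_con_contexto(lineas, nombre_variable, max_ocurrencias=3):
    """Busca 'nombre_variable' como palabra completa y devuelve contexto de 3 líneas."""
    patron = re.compile(rf'\b{re.escape(nombre_variable)}\b')
    usos = []
    for i, linea in enumerate(lineas):
        if patron.search(linea):
            usos.append({
                'line_number': i + 1,  # 1-indexado, como en el reporte
                'before': lineas[i - 1].strip() if i > 0 else "",
                'current': linea.strip(),
                'after': lineas[i + 1].strip() if i < len(lineas) - 1 else "",
            })
            if len(usos) >= max_ocurrencias:
                break
    return usos

# Datos hipotéticos: 'myDI_Reset_Btn2' NO coincide gracias a los límites de palabra
lineas = ["VAR", "  DI_Reset_Btn : BOOL;", "END_VAR", "  myDI_Reset_Btn2 : BOOL;"]
print(buscar_con_contexto(lineas, "DI_Reset_Btn"))
```

El uso de `re.escape` evita que caracteres especiales del nombre (por ejemplo `.` en `STANDARD.LIB_...`) se interpreten como metacaracteres de la expresión regular.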
data/log.txt
@@ -285,3 +285,293 @@
[12:16:16] 📄 TwinCAT_IO_Usage_Snippets.md
[12:16:16] Ejecución de x1.5_full_io_documentation.py finalizada (success). Duración: 0:00:08.050593.
[12:16:16] Log completo guardado en: D:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\TwinCat\log_x1.5_full_io_documentation.txt
[12:28:26] Iniciando ejecución de x1.5_full_io_documentation.py en C:\Trabajo\SIDEL\13 - E5.007560 - Modifica O&U - SAE235\Reporte\Analisis...
[12:28:27] 🚀 Iniciando documentación completa de IOs de TwinCAT
[12:28:27] ================================================================================
[12:28:27] 📁 Directorio de trabajo: C:\Trabajo\SIDEL\13 - E5.007560 - Modifica O&U - SAE235\Reporte\Analisis
[12:28:27] 📁 Directorio de resultados: C:\Trabajo\SIDEL\13 - E5.007560 - Modifica O&U - SAE235\Reporte\Analisis\TwinCat
[12:28:27] 🔍 Escaneando definiciones TwinCAT activas en: C:\Trabajo\SIDEL\13 - E5.007560 - Modifica O&U - SAE235\Reporte\Analisis\TwinCat/scl
[12:28:27] ✅ Encontradas 141 definiciones de IO activas.
[12:28:27] 🔍 Buscando usos de variables definidas en: C:\Trabajo\SIDEL\13 - E5.007560 - Modifica O&U - SAE235\Reporte\Analisis\TwinCat/scl
[12:28:27] 📄 Analizando uso en: ADSVARREAD.scl
[12:28:27] 📄 Analizando uso en: ADSVARTRANSLATE.scl
[12:28:27] 📄 Analizando uso en: ADSVARWRITE.scl
[12:28:27] 📄 Analizando uso en: AMMONIACTRL.scl
[12:28:27] 📄 Analizando uso en: ARRAYTOREAL.scl
[12:28:27] 📄 Analizando uso en: BLENDERPROCEDURE_VARIABLES.scl
[12:28:27] 📄 Analizando uso en: BLENDERRINSE.scl
[12:28:27] 📄 Analizando uso en: BLENDER_PID_CTRL_LOOP.scl
[12:28:27] 📄 Analizando uso en: BLENDER_PROCEDURECALL.scl
[12:28:28] 📄 Analizando uso en: BLENDER_RUNCONTROL.scl
[12:28:28] 📄 Analizando uso en: BLENDER_VARIABLES.scl
[12:28:28] 📄 Analizando uso en: BLENDFILLRECSTRUCT.scl
[12:28:28] 📄 Analizando uso en: BLENDFILLSENDSTRUCT.scl
[12:28:28] 📄 Analizando uso en: BLENDFILLSYSTEM_STARTUP.scl
[12:28:28] 📄 Analizando uso en: BRIXTRACKING.scl
[12:28:28] 📄 Analizando uso en: BYTES_TO_DWORD.scl
[12:28:28] 📄 Analizando uso en: BYTES_TO_WORD.scl
[12:28:28] 📄 Analizando uso en: CALC_INJPRESS.scl
[12:28:28] 📄 Analizando uso en: CARBOWATERLINE.scl
[12:28:28] 📄 Analizando uso en: CENTRALCIP_CTRL.scl
[12:28:28] 📄 Analizando uso en: CETRIFUGAL_HEAD.scl
[12:28:28] 📄 Analizando uso en: CIPRECEIVESTRUCT.scl
[12:28:28] 📄 Analizando uso en: CIPSENDSTRUCT.scl
[12:28:28] 📄 Analizando uso en: CIP_CVQ.scl
[12:28:28] 📄 Analizando uso en: CIP_LINK_TYPE.scl
[12:28:28] 📄 Analizando uso en: CIP_LIST_ELEMENT.scl
[12:28:28] 📄 Analizando uso en: CIP_MAIN.scl
[12:28:28] 📄 Analizando uso en: CIP_PROGRAM_VARIABLES.scl
[12:28:28] 📄 Analizando uso en: CIP_SIMPLE_TYPE.scl
[12:28:28] 📄 Analizando uso en: CIP_STEP_TYPE.scl
[12:28:28] 📄 Analizando uso en: CIP_WAITEVENT_TYPE.scl
[12:28:28] 📄 Analizando uso en: CLEANBOOLARRAY.scl
[12:28:28] 📄 Analizando uso en: CLOCK_SIGNAL.scl
[12:28:28] 📄 Analizando uso en: CLOCK_VARIABLES.scl
[12:28:28] 📄 Analizando uso en: CO2EQPRESS.scl
[12:28:28] 📄 Analizando uso en: CO2INJPRESSURE.scl
[12:28:28] 📄 Analizando uso en: CO2_SOLUBILITY.scl
[12:28:28] 📄 Analizando uso en: CONVERTREAL.scl
[12:28:28] 📄 Analizando uso en: CVQ_0_6_PERC.scl
[12:28:28] 📄 Analizando uso en: CVQ_1P7_8_PERC.scl
[12:28:28] 📄 Analizando uso en: DATA_FROM_CIP.scl
[12:28:28] 📄 Analizando uso en: DATA_TO_CIP.scl
[12:28:28] 📄 Analizando uso en: DEAIRCO2TEMPCOMP.scl
[12:28:28] 📄 Analizando uso en: DEAIREATIONVALVE.scl
[12:28:28] 📄 Analizando uso en: DEAIREATOR_STARTUP.scl
[12:28:29] 📄 Analizando uso en: DELAY.scl
[12:28:29] 📄 Analizando uso en: DELTAP.scl
[12:28:29] 📄 Analizando uso en: DENSIMETER_CALIBRATION.scl
[12:28:29] 📄 Analizando uso en: DERIVE.scl
[12:28:29] 📄 Analizando uso en: DEVICENET_VARIABLES.scl
[12:28:29] 📄 Analizando uso en: DWORD_TO_BYTES.scl
[12:28:29] 📄 Analizando uso en: EXEC_SIMPLE_CIP.scl
[12:28:29] 📄 Analizando uso en: FASTRINSE.scl
[12:28:29] 📄 Analizando uso en: FB41_PIDCONTROLLER.scl
[12:28:29] 📄 Analizando uso en: FC_CONTROL_WORD.scl
[12:28:29] 📄 Analizando uso en: FC_STATUS_WORD.scl
[12:28:29] 📄 Analizando uso en: FEEDFORWARD.scl
[12:28:29] 📄 Analizando uso en: FILLERHEAD.scl
[12:28:29] 📄 Analizando uso en: FILLERRECEIVESTRUCT.scl
[12:28:29] 📄 Analizando uso en: FILLERRINSE.scl
[12:28:29] 📄 Analizando uso en: FILLERRINSETANK_CTRL.scl
[12:28:29] 📄 Analizando uso en: FILLERSENDSTRUCT.scl
[12:28:29] 📄 Analizando uso en: FILLER_CONTROL.scl
[12:28:29] 📄 Analizando uso en: FILLINGTIME.scl
[12:28:29] 📄 Analizando uso en: FIRSTPRODUCTION.scl
[12:28:29] 📄 Analizando uso en: FLOW_TO_PRESS_LOSS.scl
[12:28:29] 📄 Analizando uso en: FREQ_TO_MMH2O.scl
[12:28:29] 📄 Analizando uso en: FRICTIONLOSS.scl
[12:28:29] 📄 Analizando uso en: GETPRODBRIXCO2_FROMANALOGINPUT.scl
[12:28:29] 📄 Analizando uso en: GETPRODO2_FROMANALOGINPUT.scl
[12:28:29] 📄 Analizando uso en: GLOBAL_ALARMS.scl
[12:28:29] 📄 Analizando uso en: GLOBAL_VARIABLES_IN_OUT.scl
[12:28:30] 📄 Analizando uso en: HMI_ALARMS.scl
[12:28:30] 📄 Analizando uso en: HMI_BLENDER_PARAMETERS.scl
[12:28:30] 📄 Analizando uso en: HMI_IO_SHOWING.scl
[12:28:30] 📄 Analizando uso en: HMI_LOCAL_CIP_VARIABLES.scl
[12:28:30] 📄 Analizando uso en: HMI_SERVICE.scl
[12:28:30] 📄 Analizando uso en: HMI_VARIABLES_CMD.scl
[12:28:30] 📄 Analizando uso en: HMI_VARIABLES_STATUS.scl
[12:28:30] 📄 Analizando uso en: INPUT.scl
[12:28:30] 📄 Analizando uso en: INPUT_CIP_SIGNALS.scl
[12:28:30] 📄 Analizando uso en: INPUT_SIGNAL.scl
[12:28:30] 📄 Analizando uso en: INTEGRAL.scl
[12:28:30] 📄 Analizando uso en: LOCALCIP_CTRL.scl
[12:28:30] 📄 Analizando uso en: LOWPASSFILTER.scl
[12:28:30] 📄 Analizando uso en: LOWPASSFILTEROPT.scl
[12:28:30] 📄 Analizando uso en: MASELLI.scl
[12:28:30] 📄 Analizando uso en: MASELLIOPTO_TYPE.scl
[12:28:30] 📄 Analizando uso en: MASELLIUC05_TYPE.scl
[12:28:30] 📄 Analizando uso en: MASELLIUR22_TYPE.scl
[12:28:30] 📄 Analizando uso en: MASELLI_CONTROL.scl
[12:28:30] 📄 Analizando uso en: MAXCARBOCO2_VOL.scl
[12:28:30] 📄 Analizando uso en: MESSAGESCROLL.scl
[12:28:30] 📄 Analizando uso en: MESSAGE_SCROLL.scl
[12:28:30] 📄 Analizando uso en: MFMANALOG_VALUES.scl
[12:28:30] 📄 Analizando uso en: MFM_REAL_STRUCT.scl
[12:28:30] 📄 Analizando uso en: MMH2O_TO_FREQ.scl
[12:28:30] 📄 Analizando uso en: MODVALVEFAULT.scl
[12:28:30] 📄 Analizando uso en: MOVEARRAY.scl
[12:28:30] 📄 Analizando uso en: MPDS1000.scl
[12:28:30] 📄 Analizando uso en: MPDS1000_CONTROL.scl
[12:28:31] 📄 Analizando uso en: MPDS1000_TYPE.scl
[12:28:31] 📄 Analizando uso en: MPDS2000.scl
[12:28:31] 📄 Analizando uso en: MPDS2000_CONTROL.scl
[12:28:31] 📄 Analizando uso en: MPDS2000_TYPE.scl
[12:28:31] 📄 Analizando uso en: MPDS_PA_CONTROL.scl
[12:28:31] 📄 Analizando uso en: MSE_SLOPE.scl
[12:28:31] 📄 Analizando uso en: MYVAR.scl
[12:28:31] 📄 Analizando uso en: OR_ARRAYBOOL.scl
[12:28:31] 📄 Analizando uso en: OUTPUT.scl
[12:28:31] 📄 Analizando uso en: PARAMETERNAMETYPE.scl
[12:28:31] 📄 Analizando uso en: PA_MPDS.scl
[12:28:31] 📄 Analizando uso en: PERIPHERIAL.scl
[12:28:31] 📄 Analizando uso en: PID_VARIABLES.scl
[12:28:31] 📄 Analizando uso en: PLC CONFIGURATION.scl
[12:28:31] 📄 Analizando uso en: PNEUMATIC_VALVE_CTRL.scl
[12:28:31] 📄 Analizando uso en: PPM_O2.scl
[12:28:31] 📄 Analizando uso en: PRODBRIXRECOVERY.scl
[12:28:31] 📄 Analizando uso en: PRODTANK_DRAIN.scl
[12:28:31] 📄 Analizando uso en: PRODTANK_RUNOUT.scl
[12:28:31] 📄 Analizando uso en: PRODUCTAVAILABLE.scl
[12:28:32] 📄 Analizando uso en: PRODUCTION_VARIABLES.scl
[12:28:32] 📄 Analizando uso en: PRODUCTLITERINTANK.scl
[12:28:32] 📄 Analizando uso en: PRODUCTPIPEDRAIN.scl
[12:28:32] 📄 Analizando uso en: PRODUCTPIPERUNOUT.scl
[12:28:32] 📄 Analizando uso en: PRODUCTQUALITY.scl
[12:28:32] 📄 Analizando uso en: PRODUCTTANKBRIX.scl
[12:28:32] 📄 Analizando uso en: PRODUCTTANK_PRESSCTRL.scl
[12:28:32] 📄 Analizando uso en: PROFIBUS_DATA.scl
[12:28:32] 📄 Analizando uso en: PROFIBUS_NETWORK.scl
[12:28:32] 📄 Analizando uso en: PROFIBUS_VARIABLES.scl
[12:28:32] 📄 Analizando uso en: PULSEPRESSURE.scl
[12:28:32] 📄 Analizando uso en: PUMPSCONTROL.scl
[12:28:32] 📄 Analizando uso en: READANALOGIN.scl
[12:28:32] 📄 Analizando uso en: READPERIPHERIAL.scl
[12:28:32] 📄 Analizando uso en: SAFETIES.scl
[12:28:32] 📄 Analizando uso en: SELCHECKBRIXSOURCE.scl
[12:28:32] 📄 Analizando uso en: SIGNALS_INTEFACE.scl
[12:28:32] 📄 Analizando uso en: SIGNAL_GEN.scl
[12:28:32] 📄 Analizando uso en: SINUSOIDAL_SIGNAL.scl
[12:28:32] 📄 Analizando uso en: SLEWLIMIT.scl
[12:28:32] 📄 Analizando uso en: SLIM_BLOCK.scl
[12:28:32] 📄 Analizando uso en: SLIM_VARIABLES.scl
[12:28:32] 📄 Analizando uso en: SOFTNET_VARIABLES.scl
[12:28:32] 📄 Analizando uso en: SPEEDADJUST.scl
[12:28:32] 📄 Analizando uso en: SP_AND_P_VARIABLES.scl
[12:28:32] 📄 Analizando uso en: STANDARD.LIB_5.6.98 09_39_02.scl
[12:28:32] 📄 Analizando uso en: STATISTICALANALISYS.scl
[12:28:32] 📄 Analizando uso en: SYRBRIX_AUTOCORRECTION.scl
[12:28:32] 📄 Analizando uso en: SYRUPDENSITY.scl
[12:28:33] 📄 Analizando uso en: SYRUPROOMCTRL.scl
[12:28:33] 📄 Analizando uso en: SYRUP_LINE_MFM_PREP.scl
[12:28:33] 📄 Analizando uso en: SYRUP_MFM_STARTUP.scl
[12:28:33] 📄 Analizando uso en: SYRUP_RUNOUT.scl
[12:28:33] 📄 Analizando uso en: SYSTEMRUNOUT_VARIABLES.scl
[12:28:33] 📄 Analizando uso en: SYSTEM_DATAS.scl
[12:28:33] 📄 Analizando uso en: SYSTEM_RUN_OUT.scl
[12:28:33] 📄 Analizando uso en: TANKLEVEL.scl
[12:28:33] 📄 Analizando uso en: TANKLEVELTOHEIGHT.scl
[12:28:33] 📄 Analizando uso en: TASK CONFIGURATION.scl
[12:28:33] 📄 Analizando uso en: TCPLCUTILITIES.LIB_11.12.01 09_39_02.scl
[12:28:33] 📄 Analizando uso en: TCSYSTEM.LIB_16.9.02 09_39_02.scl
[12:28:33] 📄 Analizando uso en: TESTFLOWMETERS.scl
[12:28:33] 📄 Analizando uso en: UDP_STRUCT.scl
[12:28:33] 📄 Analizando uso en: UV_LAMP.scl
[12:28:33] 📄 Analizando uso en: VACUUMCTRL.scl
[12:28:33] 📄 Analizando uso en: VALVEFAULT.scl
[12:28:33] 📄 Analizando uso en: VALVEFLOW.scl
[12:28:33] 📄 Analizando uso en: VARIABLE_CONFIGURATION.scl
[12:28:33] 📄 Analizando uso en: VOID.scl
[12:28:33] 📄 Analizando uso en: WATERDENSITY.scl
[12:28:33] 📄 Analizando uso en: WORD_TO_BYTES.scl
[12:28:33] 📄 Analizando uso en: WRITEPERIPHERIAL.scl
[12:28:33] 📄 Analizando uso en: _BLENDER_CTRL_MAIN.scl
[12:28:33] 📄 Analizando uso en: _BLENDER_PID_MAIN.scl
[12:28:34] 📄 Analizando uso en: _BOOLARRAY_TO_DWORD.scl
[12:28:34] 📄 Analizando uso en: _BOOLARRAY_TO_WORD.scl
[12:28:34] 📄 Analizando uso en: _DWORD_SWAP_BYTEARRAY.scl
[12:28:34] 📄 Analizando uso en: _DWORD_TO_BOOLARRAY.scl
[12:28:34] 📄 Analizando uso en: _FILLING_HEAD_PID_CTRL.scl
[12:28:34] 📄 Analizando uso en: _PUMPCONTROL.scl
[12:28:34] 📄 Analizando uso en: _STEPMOVE.scl
[12:28:34] 📄 Analizando uso en: _WORD_TO_BOOLARRAY.scl
[12:28:34] ✅ Encontrados 224 usos para 83 variables distintas.
[12:28:34] 📄 Generando tabla resumen: C:\Trabajo\SIDEL\13 - E5.007560 - Modifica O&U - SAE235\Reporte\Analisis\TwinCat\TwinCAT_Full_IO_List.md
[12:28:34] ✅ Tabla resumen generada exitosamente.
[12:28:34] 📄 Generando reporte de snippets: C:\Trabajo\SIDEL\13 - E5.007560 - Modifica O&U - SAE235\Reporte\Analisis\TwinCat\TwinCAT_IO_Usage_Snippets.md
[12:28:34] Generando snippets para 83 variables con uso...
[12:28:34] 📝 Procesando 1/83: AI_ProductTankLevel (1 usos)
[12:28:34] 📝 Procesando 2/83: AI_ProductTankPressure (1 usos)
[12:28:34] 📝 Procesando 3/83: AI_DeaireationValve_VEP4 (2 usos)
[12:28:34] 📝 Procesando 4/83: AI_ProdTankPressureValve_VEP1 (1 usos)
[12:28:34] 📝 Procesando 5/83: AI_ProductTemperature (1 usos)
[12:28:34] 📝 Procesando 6/83: AI_SyrupTankLevel (1 usos)
[12:28:34] 📝 Procesando 7/83: AI_DeairWaterTemperature (1 usos)
[12:28:34] 📝 Procesando 8/83: AI_InjectionPressure (2 usos)
[12:28:34] 📝 Procesando 9/83: gProduct_VFC_MainActualValue (1 usos)
[12:28:34] 📝 Procesando 10/83: DI_AuxVoltage_On (1 usos)
[12:28:34] 📝 Procesando 11/83: DI_Reset_Horn_Btn (2 usos)
[12:28:34] 📝 Procesando 12/83: DI_Reset_Btn (79 usos)
[12:28:34] 📝 Procesando 13/83: DI_Blender_Stop_Btn (3 usos)
[12:28:34] 📝 Procesando 14/83: DI_Blender_Start_Btn (1 usos)
[12:28:34] 📝 Procesando 15/83: DI_PowerSuppliesOk (3 usos)
[12:28:34] 📝 Procesando 16/83: DI_Min_Deair_Level (1 usos)
[12:28:34] 📝 Procesando 17/83: DI_ProdTankEmpty (1 usos)
[12:28:34] 📝 Procesando 18/83: DI_BatteryNotReady (1 usos)
[12:28:34] 📝 Procesando 19/83: DI_VM1_Water_Valve_Closed (1 usos)
[12:28:34] 📝 Procesando 20/83: DI_VM2_Syrup_Valve_Closed (1 usos)
[12:28:34] 📝 Procesando 21/83: DI_VM3_CO2_Valve_Closed (1 usos)
[12:28:34] 📝 Procesando 22/83: DI_Water_Pump_Contactor (1 usos)
[12:28:34] 📝 Procesando 23/83: DI_Syrup_Pump_Ovrld (1 usos)
[12:28:34] 📝 Procesando 24/83: DI_Syrup_Pump_Contactor (1 usos)
[12:28:34] 📝 Procesando 25/83: DI_Product_Pump_Contactor (1 usos)
[12:28:34] 📝 Procesando 26/83: DI_SyrRoom_Pump_Ready (1 usos)
[12:28:34] 📝 Procesando 27/83: DI_CIP_CIPMode (1 usos)
[12:28:34] 📝 Procesando 28/83: DI_CIP_RinseMode (1 usos)
[12:28:34] 📝 Procesando 29/83: DI_CIP_DrainRequest (1 usos)
[12:28:34] 📝 Procesando 30/83: DI_CIP_CIPCompleted (1 usos)
[12:28:34] 📝 Procesando 31/83: DI_Air_InletPress_OK (1 usos)
[12:28:34] 📝 Procesando 32/83: DI_Syrup_Line_Drain_Sensor (1 usos)
[12:28:34] 📝 Procesando 33/83: gWaterTotCtrl_Node20 (3 usos)
[12:28:34] 📝 Procesando 34/83: gSyrControl_Node21 (7 usos)
[12:28:34] 📝 Procesando 35/83: gCO2Control_Node22 (7 usos)
[12:28:34] 📝 Procesando 36/83: gProductTotCtrl_Node17 (3 usos)
[12:28:34] 📝 Procesando 37/83: AO_WaterCtrlValve_VM1 (1 usos)
[12:28:34] 📝 Procesando 38/83: AO_SyrupCtrlValve_VM2 (1 usos)
[12:28:34] 📝 Procesando 39/83: AO_CarboCO2CtrlValve_VM3 (1 usos)
[12:28:34] 📝 Procesando 40/83: AO_ProdTankPressureValve_VEP1 (1 usos)
[12:28:34] 📝 Procesando 41/83: AO_DeaireationValve_VEP4 (2 usos)
[12:28:34] 📝 Procesando 42/83: AO_ProdTempCtrlValve (1 usos)
[12:28:34] 📝 Procesando 43/83: AO_SyrupInletValve_VEP3 (1 usos)
[12:28:34] 📝 Procesando 44/83: AO_InjectionPressure (1 usos)
[12:28:34] 📝 Procesando 45/83: gProduct_VFC_MainRefValue (1 usos)
[12:28:34] 📝 Procesando 46/83: DO_SyrupInletValve_Enable (1 usos)
[12:28:34] 📝 Procesando 47/83: DO_HoldBrixMeter (2 usos)
[12:28:34] 📝 Procesando 48/83: DO_SyrupRoomPump_Run (2 usos)
[12:28:34] 📝 Procesando 49/83: DO_SyrupRoomWaterReq (2 usos)
[12:28:34] 📝 Procesando 50/83: DO_CIP_CIPRequest (2 usos)
[12:28:34] 📝 Procesando 51/83: DO_CIP_DrainCompleted (2 usos)
[12:28:34] 📝 Procesando 52/83: DO_Horn (2 usos)
[12:28:34] 📝 Procesando 53/83: DO_Blender_Run_Lamp (2 usos)
[12:28:34] 📝 Procesando 54/83: DO_Alarm_Lamp (2 usos)
[12:28:34] 📝 Procesando 55/83: DO_RotorAlarm_Lamp (2 usos)
[12:28:34] 📝 Procesando 56/83: DO_Water_Pump_Run (2 usos)
[12:28:34] 📝 Procesando 57/83: DO_Syrup_Pump_Run (2 usos)
[12:28:34] 📝 Procesando 58/83: DO_Product_Pump_Run (3 usos)
[12:28:34] 📝 Procesando 59/83: DO_EV11_BlowOff_Valve (2 usos)
[12:28:34] 📝 Procesando 60/83: DO_EV13_Prod_Recirc_Valve (2 usos)
[12:28:34] 📝 Procesando 61/83: DO_EV14_DeairDrain_Valve (2 usos)
[12:28:34] 📝 Procesando 62/83: DO_EV15_ProductTank_Drain_Valve (2 usos)
[12:28:34] 📝 Procesando 63/83: DO_EV16_SyrupTank_Drain_Valve (2 usos)
[12:28:34] 📝 Procesando 64/83: DO_EV17_BufferTankSprayBall_Valve (2 usos)
[12:28:34] 📝 Procesando 65/83: DO_EV18_DeairOverfill_Valve (2 usos)
[12:28:34] 📝 Procesando 66/83: DO_EV21_ProdTankOverfill_Valve (2 usos)
[12:28:34] 📝 Procesando 67/83: DO_EV22_WaterPumpPrime_Valve (2 usos)
[12:28:34] 📝 Procesando 68/83: DO_EV23_SerpentineDrain_valve (2 usos)
[12:28:34] 📝 Procesando 69/83: DO_EV24_SyrupRecirc_Valve (2 usos)
[12:28:34] 📝 Procesando 70/83: DO_EV26_CO2InjShutOff_Valve (2 usos)
[12:28:34] 📝 Procesando 71/83: DO_EV27_DeairSprayBall_Valve (2 usos)
[12:28:34] 📝 Procesando 72/83: DO_EV28_DeairStartCO2Inj_Valve (2 usos)
[12:28:34] 📝 Procesando 73/83: DO_EV44_SyrupLineDrain (2 usos)
[12:28:34] 📝 Procesando 74/83: DO_EV45_ProductChillerDrain (2 usos)
[12:28:34] 📝 Procesando 75/83: DO_EV61_SyrupTankSprayBall (2 usos)
[12:28:34] 📝 Procesando 76/83: DO_EV62_ProductOutlet (3 usos)
[12:28:34] 📝 Procesando 77/83: DO_EV69_Blender_ProductPipeDrain (2 usos)
[12:28:34] 📝 Procesando 78/83: DO_EV81_Prod_Recirc_Chiller_Valve (2 usos)
[12:28:34] 📝 Procesando 79/83: DO_EV01_Deair_Lvl_Ctrl_Valve (2 usos)
[12:28:34] 📝 Procesando 80/83: DO_EV02_Deair_FillUp_Valve (2 usos)
[12:28:34] 📝 Procesando 81/83: gPAmPDSFreeze (2 usos)
[12:28:34] 📝 Procesando 82/83: gPAmPDSCarboStop (2 usos)
[12:28:34] 📝 Procesando 83/83: gPAmPDSInlinePumpStop (2 usos)
[12:28:34] Generando tabla para 58 variables no usadas...
[12:28:34] ✅ Reporte de snippets generado exitosamente.
[12:28:34] 📄 Generando reporte JSON: C:\Trabajo\SIDEL\13 - E5.007560 - Modifica O&U - SAE235\Reporte\Analisis\TwinCat\TwinCAT_IO_Usage_Snippets.json
[12:28:34] ✅ Reporte JSON generado exitosamente.
[12:28:34] 🎉 Análisis completado exitosamente!
[12:28:34] 📁 Archivos generados en: C:\Trabajo\SIDEL\13 - E5.007560 - Modifica O&U - SAE235\Reporte\Analisis\TwinCat
[12:28:34] 📄 TwinCAT_Full_IO_List.md
[12:28:34] 📄 TwinCAT_IO_Usage_Snippets.md
[12:28:34] 📄 TwinCAT_IO_Usage_Snippets.json
[12:28:34] Ejecución de x1.5_full_io_documentation.py finalizada (success). Duración: 0:00:07.683469.
[12:28:34] Log completo guardado en: D:\Proyectos\Scripts\ParamManagerScripts\backend\script_groups\TwinCat\log_x1.5_full_io_documentation.txt
@@ -3,21 +3,25 @@
Factory class for creating language detection services
"""
from typing import Optional, Set
from .base import LanguageDetectionService
from .langid_service import LangIdService


class LanguageFactory:
    """Factory class for creating language detection service instances"""

    @staticmethod
    def create_service(
        service_type: str, allowed_languages: Optional[Set[str]] = None, **kwargs
    ) -> Optional[LanguageDetectionService]:
        """
        Create an instance of the specified language detection service

        Args:
            service_type: Type of language detection service ("langid", etc.)
            allowed_languages: Set of allowed language codes
            **kwargs: Additional arguments for service initialization

        Returns:
            LanguageDetectionService instance or None if service_type is not recognized
        """

@@ -25,9 +29,9 @@ class LanguageFactory:
            "langid": LangIdService,
            # Add other language detection services here
        }

        service_class = services.get(service_type.lower())
        if service_class:
            return service_class(allowed_languages=allowed_languages, **kwargs)
        raise ValueError(f"Unknown language detection service type: {service_type}")
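El patrón de despacho por diccionario que usa `LanguageFactory.create_service` puede ilustrarse con un boceto autocontenido. `StubService` es una clase hipotética que sustituye a `LangIdService` únicamente para el ejemplo:

```python
from typing import Optional, Set


class StubService:
    """Servicio de ejemplo (hipotético); ocupa el lugar de LangIdService."""

    def __init__(self, allowed_languages: Optional[Set[str]] = None):
        self.allowed_languages = allowed_languages or set()


def create_service(service_type: str, allowed_languages: Optional[Set[str]] = None):
    # Mapa de nombre de servicio → clase; el lookup normaliza a minúsculas
    services = {
        "langid": StubService,
        # Registrar aquí otros servicios de detección de idioma
    }
    service_class = services.get(service_type.lower())
    if service_class:
        return service_class(allowed_languages=allowed_languages)
    raise ValueError(f"Unknown language detection service type: {service_type}")


svc = create_service("LangId", allowed_languages={"es", "en", "it"})
print(sorted(svc.allowed_languages))
```

Normalizar con `service_type.lower()` hace que `"LangId"` y `"langid"` resuelvan a la misma clase, y un tipo desconocido lanza `ValueError` en lugar de devolver `None` silenciosamente.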
@@ -0,0 +1,103 @@
# services/llm/claude_service.py
"""
Claude (Anthropic) service implementation
"""
import anthropic
from typing import Dict, List
import json
from .base import LLMService
from config.api_keys import APIKeyManager
from utils.logger import setup_logger


class ClaudeService(LLMService):
    def __init__(
        self,
        model: str = "claude-3-7-sonnet-20250219",  # We must use the claude-3-7-sonnet-20250219 model
        temperature: float = 0.3,
        max_tokens: int = 16000,
    ):
        api_key = APIKeyManager.get_claude_key()
        if not api_key:
            raise ValueError(
                "Claude API key not found. Please set the CLAUDE_API_KEY environment variable."
            )

        self.client = anthropic.Anthropic(api_key=api_key)
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.logger = setup_logger("claude")

    def generate_text(self, prompt: str) -> str:
        self.logger.info(f"--- PROMPT ---\n{prompt}")
        try:
            message = self.client.messages.create(
                model=self.model,
                max_tokens=self.max_tokens,
                temperature=self.temperature,
                messages=[
                    {
                        "role": "user",
                        "content": prompt,
                    }
                ],
            )
            response_content = message.content[0].text
            self.logger.info(f"--- RESPONSE ---\n{response_content}")
            return response_content
        except Exception as e:
            self.logger.error(f"Error in Claude API call: {e}")
            print(f"Error in Claude API call: {e}")
            return None

    def get_similarity_scores(self, texts_pairs: Dict[str, List[str]]) -> List[float]:
        # Claude's API doesn't have a dedicated similarity or JSON mode endpoint as straightforward as others.
        # We will instruct it to return JSON.
        system_prompt = (
            "You are an expert in semantic analysis. Evaluate the semantic similarity between the pairs of texts provided. "
            "Return your response ONLY as a JSON object containing a single key 'similarity_scores' with a list of floats from 0.0 to 1.0. "
            "Do not include any other text, explanation, or markdown formatting. The output must be a valid JSON."
        )

        request_payload = json.dumps(texts_pairs)

        try:
            message = self.client.messages.create(
                model=self.model,
                max_tokens=self.max_tokens,
                temperature=self.temperature,
                system=system_prompt,
                messages=[
                    {
                        "role": "user",
                        "content": request_payload,
                    }
                ],
            )

            response_content = message.content[0].text

            try:
                # Find the JSON part of the response
                json_start = response_content.find("{")
                json_end = response_content.rfind("}") + 1
                if json_start == -1 or json_end == 0:
                    raise ValueError("No JSON object found in the response.")

                json_str = response_content[json_start:json_end]
                scores_data = json.loads(json_str)

                if isinstance(scores_data, dict) and "similarity_scores" in scores_data:
                    return scores_data["similarity_scores"]
                else:
                    raise ValueError("Unexpected JSON format from Claude.")
            except (json.JSONDecodeError, ValueError) as e:
                print(f"Error decoding Claude JSON response: {e}")
                raise ValueError(
                    "Could not decode or parse similarity scores from Claude response."
                )

        except Exception as e:
            print(f"Error in Claude similarity calculation: {e}")
            return None

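The brace-scanning fallback above (find the first `{` and the last `}`, then parse the slice) can be exercised in isolation. A minimal sketch, using a hypothetical `extract_scores` helper that is not part of the service itself:

```python
import json

def extract_scores(response_content: str) -> list:
    # Locate the outermost JSON object even if the model wrapped it in prose
    json_start = response_content.find("{")
    json_end = response_content.rfind("}") + 1
    if json_start == -1 or json_end == 0:
        raise ValueError("No JSON object found in the response.")
    scores_data = json.loads(response_content[json_start:json_end])
    if isinstance(scores_data, dict) and "similarity_scores" in scores_data:
        return scores_data["similarity_scores"]
    raise ValueError("Unexpected JSON format.")

# The fallback tolerates prose around the JSON payload
print(extract_scores('Sure! {"similarity_scores": [0.9, 0.1]} Hope that helps.'))
```

This is why the service can survive a model that ignores the "ONLY as a JSON object" instruction and adds a polite preamble, as long as exactly one JSON object is present.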
@@ -0,0 +1,83 @@
# services/llm/gemini_service.py
"""
Gemini (Google) service implementation
"""
import google.generativeai as genai
from typing import Dict, List
import json
from .base import LLMService
from config.api_keys import APIKeyManager
from utils.logger import setup_logger


class GeminiService(LLMService):
    def __init__(
        self,
        model: str = "gemini-1.5-flash",
        temperature: float = 0.3,
        max_tokens: int = 16000,
    ):
        api_key = APIKeyManager.get_gemini_key()
        if not api_key:
            raise ValueError(
                "Gemini API key not found. Please set the GEMINI_API_KEY environment variable."
            )

        genai.configure(api_key=api_key)
        self.model = genai.GenerativeModel(model)
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.logger = setup_logger("gemini")

    def generate_text(self, prompt: str) -> str:
        self.logger.info(f"--- PROMPT ---\n{prompt}")
        try:
            generation_config = genai.types.GenerationConfig(
                max_output_tokens=self.max_tokens, temperature=self.temperature
            )
            response = self.model.generate_content(
                prompt, generation_config=generation_config
            )
            response_content = response.text
            self.logger.info(f"--- RESPONSE ---\n{response_content}")
            return response_content
        except Exception as e:
            self.logger.error(f"Error in Gemini API call: {e}")
            print(f"Error in Gemini API call: {e}")
            return None

    def get_similarity_scores(self, texts_pairs: Dict[str, List[str]]) -> List[float]:
        system_prompt = (
            "You are an expert in semantic analysis. Evaluate the semantic similarity between the pairs of texts provided. "
            "Return your response ONLY as a JSON object containing a single key 'similarity_scores' with a list of floats from 0.0 to 1.0. "
            "Do not include any other text, explanation, or markdown formatting. The output must be a valid JSON."
        )

        request_payload = json.dumps(texts_pairs)
        full_prompt = f"{system_prompt}\n\n{request_payload}"

        try:
            generation_config = genai.types.GenerationConfig(
                max_output_tokens=self.max_tokens,
                temperature=self.temperature,
                response_mime_type="application/json",
            )
            response = self.model.generate_content(
                full_prompt, generation_config=generation_config
            )
            response_content = response.text

            try:
                scores_data = json.loads(response_content)
                if isinstance(scores_data, dict) and "similarity_scores" in scores_data:
                    return scores_data["similarity_scores"]
                else:
                    raise ValueError("Unexpected JSON format from Gemini.")
            except (json.JSONDecodeError, ValueError) as e:
                print(f"Error decoding Gemini JSON response: {e}")
                raise ValueError(
                    "Could not decode or parse similarity scores from Gemini response."
                )
        except Exception as e:
            print(f"Error in Gemini similarity calculation: {e}")
            return None

@@ -1,63 +1,106 @@
# services/llm/grok_service.py
"""
Grok (xAI) service implementation
"""
import httpx
from typing import Dict, List, Optional
import json
from .base import LLMService
from config.api_keys import APIKeyManager
from utils.logger import setup_logger


class GrokService(LLMService):
    def __init__(
        self,
        model: str = "grok-3-mini-fast",
        temperature: float = 0.3,
        max_tokens: int = 16000,
    ):  # We must use the grok-3-mini-fast model
        api_key = APIKeyManager.get_grok_key()
        if not api_key:
            raise ValueError(
                "Grok API key not found. Please set the GROK_API_KEY environment variable."
            )

        self.api_key = api_key
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.base_url = "https://api.x.ai/v1"
        self.client = httpx.Client(
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            }
        )
        self.logger = setup_logger("grok_xai")

    def _send_request(self, payload: Dict) -> Optional[Dict]:
        """Sends a request to the Grok API."""
        try:
            response = self.client.post(
                f"{self.base_url}/chat/completions", json=payload, timeout=60
            )
            response.raise_for_status()
            return response.json()
        except httpx.HTTPStatusError as e:
            self.logger.error(
                f"Error in Grok API call: {e.response.status_code} - {e.response.text}"
            )
            print(
                f"Error in Grok API call: {e.response.status_code} - {e.response.text}"
            )
            return None
        except Exception as e:
            self.logger.error(f"An unexpected error occurred: {e}")
            print(f"An unexpected error occurred: {e}")
            return None

    def generate_text(self, prompt: str) -> str:
        self.logger.info(f"--- PROMPT ---\n{prompt}")
        payload = {
            "model": self.model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": self.temperature,
            "max_tokens": self.max_tokens,
        }
        response_data = self._send_request(payload)
        if response_data and response_data.get("choices"):
            response_content = response_data["choices"][0]["message"]["content"]
            self.logger.info(f"--- RESPONSE ---\n{response_content}")
            return response_content
        return "Failed to get a response from Grok."

    def get_similarity_scores(self, texts_pairs: Dict[str, List[str]]) -> List[float]:
        system_prompt = (
            "You are an expert in semantic analysis. Evaluate the semantic similarity between the pairs of texts provided. "
            "Return your response ONLY as a JSON object containing a single key 'similarity_scores' with a list of floats from 0.0 to 1.0. "
            "Do not include any other text, explanation, or markdown formatting. The output must be a valid JSON."
        )
        request_payload = json.dumps(texts_pairs)

        payload = {
            "model": self.model,
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": request_payload},
            ],
            "temperature": self.temperature,
            "max_tokens": self.max_tokens,
            "response_format": {"type": "json_object"},
        }

        response_data = self._send_request(payload)
        if response_data and response_data.get("choices"):
            response_content = response_data["choices"][0]["message"]["content"]
            try:
                scores_data = json.loads(response_content)
                if isinstance(scores_data, dict) and "similarity_scores" in scores_data:
                    return scores_data["similarity_scores"]
                else:
                    raise ValueError("Unexpected JSON format from Grok.")
            except (json.JSONDecodeError, ValueError) as e:
                print(f"Error decoding Grok JSON response: {e}")
                return None
        return None

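The consumers of `_send_request` above index `choices[0]["message"]["content"]` out of the chat-completions-shaped response dict, with a truthiness guard on `"choices"`. That extraction can be sketched on its own over a canned response, using a hypothetical `extract_content` helper:

```python
def extract_content(response_data: dict) -> str:
    # Guard mirrors the service: a missing or empty "choices" list
    # (or a failed request returning None) yields the fallback string
    if response_data and response_data.get("choices"):
        return response_data["choices"][0]["message"]["content"]
    return "Failed to get a response from Grok."

canned = {"choices": [{"message": {"role": "assistant", "content": "hello"}}]}
print(extract_content(canned))
```

Because `_send_request` returns `None` on any HTTP error, the same guard covers both "request failed" and "request succeeded but returned no choices".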
@@ -0,0 +1,84 @@
# services/llm/groq_service.py
"""
Groq service implementation
"""
from groq import Groq
from typing import Dict, List
import json
from .base import LLMService
from config.api_keys import APIKeyManager
from utils.logger import setup_logger


class GroqService(LLMService):
    def __init__(
        self,
        model: str = "llama3-8b-8192",
        temperature: float = 0.3,
        max_tokens: int = 8000,
    ):
        api_key = APIKeyManager.get_groq_key()
        if not api_key:
            raise ValueError(
                "Groq API key not found. Please set the GROQ_API_KEY environment variable."
            )

        self.client = Groq(api_key=api_key)
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.logger = setup_logger("groq")

    def generate_text(self, prompt: str) -> str:
        self.logger.info(f"--- PROMPT ---\n{prompt}")
        try:
            response = self.client.chat.completions.create(
                model=self.model,
                messages=[{"role": "user", "content": prompt}],
                temperature=self.temperature,
                max_tokens=self.max_tokens,
            )
            response_content = response.choices[0].message.content
            self.logger.info(f"--- RESPONSE ---\n{response_content}")
            return response_content
        except Exception as e:
            self.logger.error(f"Error in Groq API call: {e}")
            print(f"Error in Groq API call: {e}")
            return None

    def get_similarity_scores(self, texts_pairs: Dict[str, List[str]]) -> List[float]:
        system_prompt = (
            "Evaluate the semantic similarity between the following table of pairs of texts in json format on a scale from 0 to 1. "
            "Return the similarity scores for every row in JSON format as a list of numbers, without any additional text or formatting."
        )

        request_payload = json.dumps(texts_pairs)

        try:
            response = self.client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": request_payload},
                ],
                temperature=self.temperature,
                max_tokens=self.max_tokens,
                response_format={"type": "json_object"},
            )

            response_content = response.choices[0].message.content

            try:
                scores = json.loads(response_content)
                if isinstance(scores, dict) and "similarity_scores" in scores:
                    return scores["similarity_scores"]
                elif isinstance(scores, list):
                    return scores
                else:
                    raise ValueError("Unexpected response format")
            except json.JSONDecodeError:
                raise ValueError("Could not decode response as JSON")

        except Exception as e:
            print(f"Error in Groq similarity calculation: {e}")
            return None

File diff suppressed because it is too large

@@ -5,26 +5,34 @@ Factory class for creating LLM services
from typing import Optional
from .openai_service import OpenAIService
from .ollama_service import OllamaService
from .groq_service import GroqService
from .claude_service import ClaudeService
from .grok_service import GrokService
from .gemini_service import GeminiService
from .base import LLMService


class LLMFactory:
    """Factory class for creating LLM service instances"""

    @staticmethod
    def create_service(service_type: str, **kwargs) -> Optional[LLMService]:
        """
        Create an instance of the specified LLM service

        Args:
            service_type: Type of LLM service ("openai", "ollama", "groq", "claude", "grok")
            **kwargs: Additional arguments for service initialization
        """
        services = {
            "openai": OpenAIService,
            "ollama": OllamaService,
            "groq": GroqService,
            "claude": ClaudeService,
            "grok": GrokService,
            "gemini": GeminiService,
        }

        service_class = services.get(service_type.lower())
        if service_class:
            return service_class(**kwargs)

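The factory above is a plain registry-dict lookup. Its shape can be sketched without any of the real backends, with a hypothetical `DummyService` standing in for an `LLMService` subclass:

```python
from typing import Optional

class DummyService:
    """Hypothetical stand-in for an LLMService implementation."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

class DummyFactory:
    """Mirrors the registry-lookup shape of LLMFactory.create_service."""
    @staticmethod
    def create_service(service_type: str, **kwargs) -> Optional[DummyService]:
        services = {"dummy": DummyService}
        service_class = services.get(service_type.lower())
        if service_class:
            return service_class(**kwargs)
        return None  # unknown types fall through to None

# Lookup is case-insensitive; extra kwargs are forwarded to the constructor
svc = DummyFactory.create_service("DUMMY", temperature=0.3)
print(svc.kwargs)
```

Note that `LLMFactory.create_service` has no `else` branch, so an unrecognized `service_type` implicitly returns `None`; that is why the test script below guards each created service with an `if <service>:` check.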
@@ -6,19 +6,29 @@ import ollama
import json
from typing import Dict, List
from .base import LLMService
from utils.logger import setup_logger


class OllamaService(LLMService):
    def __init__(self, model: str = "qwen3:latest", max_tokens: int = 4000):
        self.model = model
        # Explicitly set the host to avoid potential DNS/proxy issues with 'localhost'
        self.client = ollama.Client(host="127.0.0.1:11434")
        self.max_tokens = max_tokens
        self.logger = setup_logger("ollama")

    def generate_text(self, prompt: str) -> str:
        self.logger.info(f"--- PROMPT ---\n{prompt}")
        try:
            options = {"num_predict": self.max_tokens}
            response = self.client.generate(
                model=self.model, prompt=prompt, options=options
            )
            response_content = response["response"]
            self.logger.info(f"--- RESPONSE ---\n{response_content}")
            return response_content
        except Exception as e:
            self.logger.error(f"Error in Ollama API call: {e}")
            print(f"Error in Ollama API call: {e}")
            return None

@@ -27,16 +37,16 @@ class OllamaService(LLMService):
            "Evaluate the semantic similarity between the following table of pairs of texts in json format on a scale from 0 to 1. "
            "Return the similarity scores for every row in JSON format as a list of numbers, without any additional text or formatting."
        )

        request_payload = json.dumps(texts_pairs)
        prompt = f"{system_prompt}\n\n{request_payload}"

        try:
            options = {"num_predict": self.max_tokens}
            response = self.client.generate(
                model=self.model, prompt=prompt, options=options
            )

            try:
                scores = json.loads(response["response"].strip())
                if isinstance(scores, dict) and "similarity_scores" in scores:

@@ -47,7 +57,7 @@ class OllamaService(LLMService):
                    raise ValueError("Unexpected response format")
            except json.JSONDecodeError:
                raise ValueError("Could not decode response as JSON")

        except Exception as e:
            print(f"Error in Ollama similarity calculation: {e}")
            return None

@@ -6,28 +6,43 @@ from openai import OpenAI
from typing import Dict, List
import json
from .base import LLMService
from config.api_keys import APIKeyManager
from utils.logger import setup_logger


class OpenAIService(LLMService):
    def __init__(
        self,
        model: str = "gpt-4o-mini",
        temperature: float = 0.3,
        max_tokens: int = 16000,
    ):
        api_key = APIKeyManager.get_openai_key()
        if not api_key:
            raise ValueError(
                "OpenAI API key not found. Please set the OPENAI_API_KEY environment variable."
            )

        self.client = OpenAI(api_key=api_key)
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.logger = setup_logger("openai")

    def generate_text(self, prompt: str) -> str:
        self.logger.info(f"--- PROMPT ---\n{prompt}")
        try:
            response = self.client.chat.completions.create(
                model=self.model,
                messages=[{"role": "user", "content": prompt}],
                temperature=self.temperature,
                max_tokens=self.max_tokens,
            )
            response_content = response.choices[0].message.content
            self.logger.info(f"--- RESPONSE ---\n{response_content}")
            return response_content
        except Exception as e:
            self.logger.error(f"Error in OpenAI API call: {e}")
            print(f"Error in OpenAI API call: {e}")
            return None

@@ -36,23 +51,23 @@ class OpenAIService(LLMService):
            "Evaluate the semantic similarity between the following table of pairs of texts in json format on a scale from 0 to 1. "
            "Return the similarity scores for every row in JSON format as a list of numbers, without any additional text or formatting."
        )

        request_payload = json.dumps(texts_pairs)

        try:
            response = self.client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": request_payload},
                ],
                temperature=self.temperature,
                max_tokens=self.max_tokens,
            )

            response_content = response.choices[0].message.content
            cleaned_response = response_content.strip().strip("'```json").strip("```")

            try:
                scores = json.loads(cleaned_response)
                if isinstance(scores, dict) and "similarity_scores" in scores:

@@ -63,7 +78,7 @@ class OpenAIService(LLMService):
                    raise ValueError("Unexpected response format")
            except json.JSONDecodeError:
                raise ValueError("Could not decode response as JSON")

        except Exception as e:
            print(f"Error in OpenAI similarity calculation: {e}")
            return None

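One subtlety in the `cleaned_response` line above: `str.strip` takes a *set of characters*, not a substring, so `.strip("'```json")` peels any mix of `'`, backticks, and the letters `j`/`s`/`o`/`n` off both ends rather than removing the literal ```` ```json ```` fence. A small sketch of what that cleanup actually does:

```python
raw = '```json\n{"similarity_scores": [0.5]}\n```'
# str.strip treats its argument as a character set: it removes the backtick
# fences and the letters j/s/o/n from both ends, stopping at the first
# character outside the set (here, the newline after each fence).
cleaned = raw.strip().strip("'```json").strip("```")
print(repr(cleaned))  # the inner JSON survives, framed by the newlines
```

The newlines around the payload are what keep this safe here; a payload that itself started or ended with one of those characters, with no intervening newline, would be partially eaten as well.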
@@ -0,0 +1,90 @@
import os
from dotenv import load_dotenv
from llm_factory import LLMFactory

# Load environment variables from .env file
load_dotenv()


def main():
    """Main function to test the LLM services."""
    print("Testing LLM Services...")

    # --- Test OpenAI ---
    try:
        print("\n--- Testing OpenAI ---")
        openai_service = LLMFactory.create_service("openai")
        if openai_service:
            prompt_openai = "Explain the importance of structured JSON output from LLMs in one sentence."
            response_openai = openai_service.generate_text(prompt_openai)
            print(f"OpenAI Prompt: {prompt_openai}")
            print(f"OpenAI Response: {response_openai}")
        else:
            print("Failed to create OpenAI service.")
    except Exception as e:
        print(f"Error testing OpenAI: {e}")

    # --- Test Groq ---
    try:
        print("\n--- Testing Groq ---")
        groq_service = LLMFactory.create_service("groq")
        if groq_service:
            prompt_groq = (
                "Explain the concept of 'inference speed' for LLMs in one sentence."
            )
            response_groq = groq_service.generate_text(prompt_groq)
            print(f"Groq Prompt: {prompt_groq}")
            print(f"Groq Response: {response_groq}")
        else:
            print("Failed to create Groq service.")
    except Exception as e:
        print(f"Error testing Groq: {e}")

    # --- Test Claude ---
    try:
        print("\n--- Testing Claude ---")
        claude_service = LLMFactory.create_service("claude")
        if claude_service:
            prompt_claude = (
                "What is Anthropic's Constitutional AI concept in one sentence?"
            )
            response_claude = claude_service.generate_text(prompt_claude)
            print(f"Claude Prompt: {prompt_claude}")
            print(f"Claude Response: {response_claude}")
        else:
            print("Failed to create Claude service.")
    except Exception as e:
        print(f"Error testing Claude: {e}")

    # --- Test Grok (xAI) ---
    try:
        print("\n--- Testing Grok (xAI) ---")
        grok_service = LLMFactory.create_service("grok")
        if grok_service:
            prompt_grok = "What is the mission of xAI in one sentence?"
            response_grok = grok_service.generate_text(prompt_grok)
            print(f"Grok Prompt: {prompt_grok}")
            print(f"Grok Response: {response_grok}")
        else:
            print("Failed to create Grok service.")
    except Exception as e:
        print(f"Error testing Grok (xAI): {e}")

    # --- Test Ollama ---
    try:
        print("\n--- Testing Ollama ---")
        # Make sure you have an Ollama model running, e.g., `ollama run llama3.1`
        ollama_service = LLMFactory.create_service("ollama", model="llama3.1")
        if ollama_service:
            prompt_ollama = "What is Ollama?"
            response_ollama = ollama_service.generate_text(prompt_ollama)
            print(f"Ollama Prompt: {prompt_ollama}")
            print(f"Ollama Response: {response_ollama}")
        else:
            print("Failed to create Ollama service.")
    except Exception as e:
        print(f"Error testing Ollama: {e}")


if __name__ == "__main__":
    main()