* add ha_addons repository to vscode workspace

* Issue220 HA add-on documentation update (#232)

* initial DOCS.md for Addon

* links to Mosquitto and Adguard

* replaced `_` with `.` for PV strings

* mentioned add-on installation method in README.md

* fix most of the markdown linter warnings

* add missing alt texts

* added a nice 'Add repository to my Home Assistant' badge

---------

Co-authored-by: Michael Metz <michael.metz@siemens.com>
Co-authored-by: Stefan Allius <stefan.allius@t-online.de>

* S allius/issue216 (#235)


* improve docker run

- establish a multistage Dockerfile
- build Python wheels for all needed packages
- remove tools that are not needed at runtime, such as apk

* pin versions, fix hadolint warnings

* merge from dev-0.12

---------

Co-authored-by: Michael Metz <michael.metz@siemens.com>

* Issue220 HA add-on documentation update (#245)

* revised config disclaimer

* add newline at end of file to fix linter warning

---------

Co-authored-by: Michael Metz <michael.metz@siemens.com>

* 238 ha addon repository check (#244)

* move Makefile and bake file into parent folder

* build config.yaml from template

* use Makefile instead of build shell script

* ignore temporary or created files

* add rules for building the add-on repository

* add rel version of add-on

* add jinja2-cli

* ignore inverter replays which are older than 1 day (#246)

* S allius/issue7 (#248)

* report alarm and fault bitfield to ha

* define the alarm and fault names

* configure log path and max number of daily log files (#243)

* configure log path and max number of daily log files

* don't use a subfolder for configs

* use make instead of a build script

* mount /homeassistant/tsun-proxy

* Add venv to base image

* give write access to mounted folder

* initial check-in, ignore SC1091

* set advanced and stage value in config.yaml

* fix typo

* added watchdog and removed port 8127 from the mapping

* fixed typo and use the new add-on repo

- change the install button to install from
 https://github.com/s-allius/ha-addons

* add addon-rel target

* disable watchdog due to exceptions in the ha supervisor

* update changelog

---------

Co-authored-by: Michael Metz <michael.metz@siemens.com>

* Update README.md (#251)

install `https://github.com/s-allius/ha-addons` as the repository for our add-on

* add german language file (#253)

* fix return type of get_extra_info in FakeWriter

* move global startup code into main method

* pin version of base image

* avoid forwarding to a private (local) IP addr (#256)

* avoid forwarding to a private (local) IP addr

* test DNS resolver issues

* increase test coverage

* update changelog

* fix client_mode configuration block (#252)

* fix client_mode block

* add client mode

* fix tests with client_mode values

* log client_mode configuration

* add forward flag for client_mode

* improve startup logging

* added client_mode example

* adjusted translation files

* AT commands added

* typo

* missing "PLUS"

* link to config details

* improve log msg for config problems

* improve log msg on config errors

* improve log msg for config problems

* copy CHANGELOG.md into the add-on repo

---------

Co-authored-by: Michael Metz <michael.metz@siemens.com>

* rename "ConfigErr" to match naming convention

* disable test coverage for __main__

* update changelog version 0.12

---------

Co-authored-by: metzi <147942647+mime24@users.noreply.github.com>
Co-authored-by: Michael Metz <michael.metz@siemens.com>
Authored by Stefan Allius on 2024-12-22 22:25:50 +01:00, committed by GitHub
parent badc065b7a · commit 3bf245300d
37 changed files with 971 additions and 242 deletions

.hadolint.yaml Normal file

@@ -0,0 +1,2 @@
ignored:
- SC1091


@@ -7,9 +7,21 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [unreleased]
## [0.12.0] - 2024-12-22
- add hadolint configuration
- detect usage of a local DNS resolver [#37](https://github.com/s-allius/tsun-gen3-proxy/issues/37)
- path for logs is now configurable by cli args
- configure the number of kept logfiles by cli args
- add DOCS.md and CHANGELOG.md for add-ons
- pin library versions and update them with renovate
- build config.yaml for add-ons by a jinja2 template
- use gnu make to build proxy and add-on
- make the configuration more flexible, add command line args to control this
- fix the python path so we don't need special import paths for unit tests anymore
- support test coverager in vscode
- add emulator mode [#205](https://github.com/s-allius/tsun-gen3-proxy/issues/205)
- ignore inverter replays which are older than 1 day [#246](https://github.com/s-allius/tsun-gen3-proxy/issues/246)
- support test coverage in vscode
- upgrade SonarQube action to version 4
- update github action to Ubuntu 24.04
- add initial support for home assistant add-ons from @mime24


@@ -1,10 +1,14 @@
.PHONY: build clean addon-dev addon-debug sddon-rc
.PHONY: build clean addon-dev addon-debug addon-rc addon-rel debug dev preview rc rel
# debug dev:
# $(MAKE) -C app $@
debug dev preview rc rel:
$(MAKE) -C app $@
clean build:
$(MAKE) -C ha_addons/ha_addon $@
$(MAKE) -C ha_addons $@
addon-dev addon-debug addon-rc addon-rel:
$(MAKE) -C ha_addons $(patsubst addon-%,%,$@)
check-docker-compose:
docker-compose config -q
addon-dev addon-debug addon-rc:
$(MAKE) -C ha_addons/ha_addon $(patsubst addon-%,%,$@)


@@ -9,13 +9,13 @@
<a href="https://www.python.org/downloads/release/python-3120/"><img alt="Supported Python versions" src="https://img.shields.io/badge/python-3.12-blue.svg"></a>
<a href="https://sbtinstruments.github.io/aiomqtt/introduction.html"><img alt="Supported aiomqtt versions" src="https://img.shields.io/badge/aiomqtt-2.3.0-lightblue.svg"></a>
<a href="https://libraries.io/pypi/aiocron"><img alt="Supported aiocron versions" src="https://img.shields.io/badge/aiocron-1.8-lightblue.svg"></a>
<a href="https://toml.io/en/v1.0.0"><img alt="Supported toml versions" src="https://img.shields.io/badge/toml-1.0.0-lightblue.svg"></a>
<a href="https://toml.io/en/v1.0.0"><img alt="Supported toml versions" src="https://img.shields.io/badge/toml-1.0.0-lightblue.svg"></a>
<br>
<a href="https://sonarcloud.io/component_measures?id=s-allius_tsun-gen3-proxy&metric=alert_status"><img src="https://sonarcloud.io/api/project_badges/measure?project=s-allius_tsun-gen3-proxy&metric=alert_status"></a>
<a href="https://sonarcloud.io/component_measures?id=s-allius_tsun-gen3-proxy&metric=bugs"><img src="https://sonarcloud.io/api/project_badges/measure?project=s-allius_tsun-gen3-proxy&metric=bugs"></a>
<a href="https://sonarcloud.io/component_measures?id=s-allius_tsun-gen3-proxy&metric=code_smells"><img src="https://sonarcloud.io/api/project_badges/measure?project=s-allius_tsun-gen3-proxy&metric=code_smells"></a>
<a href="https://sonarcloud.io/component_measures?id=s-allius_tsun-gen3-proxy&metric=alert_status"><img alt="The quality gate status" src="https://sonarcloud.io/api/project_badges/measure?project=s-allius_tsun-gen3-proxy&metric=alert_status"></a>
<a href="https://sonarcloud.io/component_measures?id=s-allius_tsun-gen3-proxy&metric=bugs"><img alt="No of bugs" src="https://sonarcloud.io/api/project_badges/measure?project=s-allius_tsun-gen3-proxy&metric=bugs"></a>
<a href="https://sonarcloud.io/component_measures?id=s-allius_tsun-gen3-proxy&metric=code_smells"><img alt="No of code-smells" src="https://sonarcloud.io/api/project_badges/measure?project=s-allius_tsun-gen3-proxy&metric=code_smells"></a>
<br>
<a href="https://sonarcloud.io/component_measures?id=s-allius_tsun-gen3-proxy&metric=coverage"><img src="https://sonarcloud.io/api/project_badges/measure?project=s-allius_tsun-gen3-proxy&metric=coverage"></a>
<a href="https://sonarcloud.io/component_measures?id=s-allius_tsun-gen3-proxy&metric=coverage"><img alt="Test coverage in percent" src="https://sonarcloud.io/api/project_badges/measure?project=s-allius_tsun-gen3-proxy&metric=coverage"></a>
</p>
# Overview
@@ -28,6 +28,9 @@ Through this, the inverter then establishes a connection to the proxy and the pr
By means of `docker` a simple installation and operation is possible. By using `docker-compose`, a complete stack of proxy, `MQTT-broker` and `home-assistant` can be started easily.
Alternatively, you can run the TSUN-Proxy as a Home Assistant add-on. Installing this add-on is straightforward and no different from installing any other custom Home Assistant add-on.
Follow the instructions in the add-on subdirectory `ha_addons`.
<br>
This project is not related to the company TSUN. It is a private initiative that aims to connect TSUN inverters with an MQTT broker. There is no support and no warranty from TSUN.
<br><br>
@@ -65,11 +68,20 @@ Here are some screenshots of how the inverter is displayed in the Home Assistant
## Requirements
### for Docker Installation
- A running Docker engine to host the container
- Ability to loop the proxy into the connection between the inverter and the TSUN cloud
### for Home Assistant Add-on Installation
- Running Home Assistant on Home Assistant OS or Supervised. Container and Core installations don't support add-ons.
- Ability to loop the proxy into the connection between the inverter and the TSUN cloud
# Getting Started
## for Docker Installation
To run the proxy, you first need to create the image. You can do this quite simply as follows:
```sh
@@ -95,8 +107,22 @@ With this information we can customize the `docker run`` statement:
docker run --dns '8.8.8.8' --env 'UID=1050' -p '5005:5005' -p '10000:10000' -v ./config:/home/tsun-proxy/config -v ./log:/home/tsun-proxy/log tsun-proxy
```
## for Home Assistant Add-on Installation
1. Add the repository URL to the Home Assistant add-on store
[![Add repository on my Home Assistant][repository-badge]][repository-url]
2. Reload the add-on store page
3. Click the "Install" button to install the add-on.
# Configuration
```txt
❗The following description applies to the Docker installation. When installing the Home Assistant add-on, the
configuration is carried out via the Home Assistant UI. Some of the options described below are not required for
this. Additionally, creating a config.toml file is not necessary. However, for a general understanding of the
configuration and functionality, it is helpful to read the following description.
```
The configuration consists of several parts. First, the container and the proxy itself must be configured, and then the connection of the inverter to the proxy must be set up, which is done differently depending on the inverter generation
For GEN3PLUS inverters, this can be done easily via the web interface of the inverter. The GEN3 inverters do not have a web interface, so the proxy is integrated via a modified DNS resolution.
@@ -275,7 +301,7 @@ modbus_polling = true # Enable optional MODBUS polling
# if your inverter supports SSL connections you must use the client_mode. Pls, uncomment
# the next line and configure the fixed IP of your inverter
#client_mode = {host = '192.168.0.1', port = 8899}
#client_mode = {host = '192.168.0.1', port = 8899, forward = true}
pv1 = {type = 'RSM40-8-410M', manufacturer = 'Risen'} # Optional, PV module descr
pv2 = {type = 'RSM40-8-410M', manufacturer = 'Risen'} # Optional, PV module descr
@@ -320,7 +346,6 @@ In this case, you MUST NOT change the port or the host address, as this may caus
require a complete reset. Use the configuration in client mode instead.
```
If access to the web interface does not work, it can also be redirected via DNS redirection, as is necessary for the GEN3 inverters.
## Client Mode (GEN3PLUS only)
@@ -408,3 +433,6 @@ We're very happy to receive contributions to this project! You can get started b
## Changelog
The changelog lives in [CHANGELOG.md](https://github.com/s-allius/tsun-gen3-proxy/blob/main/CHANGELOG.md). It follows the principles of [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
[repository-badge]: https://img.shields.io/badge/Add%20repository%20to%20my-Home%20Assistant-41BDF5?logo=home-assistant&style=for-the-badge
[repository-url]: https://my.home-assistant.io/redirect/supervisor_add_addon_repository/?repository_url=https%3A%2F%2Fgithub.com%2Fs-allius%2Fha-addons


@@ -5,13 +5,12 @@ ARG GID=1000
#
# first stage for our base image
FROM python:3.13-alpine AS base
USER root
COPY --chmod=0700 ./hardening_base.sh .
COPY --chmod=0700 ./hardening_base.sh /
RUN apk upgrade --no-cache && \
apk add --no-cache su-exec && \
./hardening_base.sh && \
rm ./hardening_base.sh
apk add --no-cache su-exec=0.2-r3 && \
/hardening_base.sh && \
rm /hardening_base.sh
#
# second stage for building wheels packages
@@ -19,8 +18,8 @@ FROM base AS builder
# copy the dependencies file to the root dir and install requirements
COPY ./requirements.txt /root/
RUN apk add --no-cache build-base && \
python -m pip install --no-cache-dir -U pip wheel && \
RUN apk add --no-cache build-base=0.5-r3 && \
python -m pip install --no-cache-dir pip==24.3.1 wheel==0.45.1 && \
python -OO -m pip wheel --no-cache-dir --wheel-dir=/root/wheels -r /root/requirements.txt
@@ -50,9 +49,9 @@ VOLUME ["/home/$SERVICE_NAME/log", "/home/$SERVICE_NAME/config"]
# and uninstall python packages and the alpine package manager to reduce the attack surface
COPY --from=builder /root/wheels /root/wheels
COPY --chmod=0700 ./hardening_final.sh .
RUN python -m pip install --no-cache --no-index /root/wheels/* && \
RUN python -m pip install --no-cache-dir --no-cache --no-index /root/wheels/* && \
rm -rf /root/wheels && \
python -m pip uninstall --yes setuptools wheel pip && \
python -m pip uninstall --yes wheel pip && \
apk --purge del apk-tools && \
./hardening_final.sh && \
rm ./hardening_final.sh


@@ -1,12 +1,12 @@
#!make
include ../../.env
include ../.env
SHELL = /bin/sh
IMAGE = tsun-gen3-addon
IMAGE = tsun-gen3-proxy
# Folders
SRC=../../app
SRC=.
SRC_PROXY=$(SRC)/src
CNF_PROXY=$(SRC)/config
@@ -33,13 +33,13 @@ PUBLIC_URL := $(shell echo $(PUBLIC_CONTAINER_REGISTRY) | cut -f1 -d/)
PUBLIC_USER :=$(shell echo $(PUBLIC_CONTAINER_REGISTRY) | cut -f2 -d/)
dev debug: build
dev debug:
@echo version: $(VERSION) build-date: $(BUILD_DATE) image: $(PRIVAT_CONTAINER_REGISTRY)$(IMAGE)
export VERSION=$(VERSION)-$@ && \
export IMAGE=$(PRIVAT_CONTAINER_REGISTRY)$(IMAGE) && \
docker buildx bake -f docker-bake.hcl $@
rc: build
preview rc rel:
@echo version: $(VERSION) build-date: $(BUILD_DATE) image: $(PUBLIC_CONTAINER_REGISTRY)$(IMAGE)
@echo login at $(PUBLIC_URL) as $(PUBLIC_USER)
@DO_LOGIN="$(shell echo $(PUBLIC_CR_KEY) | docker login $(PUBLIC_URL) -u $(PUBLIC_USER) --password-stdin)"
@@ -48,15 +48,8 @@ rc: build
docker buildx bake -f docker-bake.hcl $@
build: rootfs
clean:
rm -r -f $(DST_PROXY)
rm -f $(DST)/requirements.txt
rootfs: $(TARGET_FILES) $(CONFIG_FILES) $(DST)/requirements.txt
.PHONY: debug dev build clean rootfs
.PHONY: debug dev preview rc rel
$(CONFIG_FILES): $(DST_PROXY)/% : $(CNF_PROXY)/%


@@ -149,7 +149,7 @@ modbus_polling = true # Enable optional MODBUS polling
# if your inverter supports SSL connections you must use the client_mode. Pls, uncomment
# the next line and configure the fixed IP of your inverter
#client_mode = {host = '192.168.0.1', port = 8899}
#client_mode = {host = '192.168.0.1', port = 8899, forward = true}
pv1 = {type = 'RSM40-8-410M', manufacturer = 'Risen'} # Optional, PV module descr
pv2 = {type = 'RSM40-8-410M', manufacturer = 'Risen'} # Optional, PV module descr


@@ -18,7 +18,7 @@ variable "DESCRIPTION" {
}
target "_common" {
context = "app"
context = "."
dockerfile = "Dockerfile"
args = {
VERSION = "${VERSION}"


@@ -4,4 +4,5 @@
pytest-cov
python-dotenv
mock
coverage
coverage
jinja2-cli


@@ -189,6 +189,7 @@ here. The default config reader is handled in the Config.init method'''
cls.err = f'error: {error}'
logging.error(
f"Can't read from {reader.descr()} => error\n {error}")
return cls.err
logging.info(f'Read from {reader.descr()} => {res}')
return cls.err


@@ -22,4 +22,4 @@ class ConfigReadEnv(ConfigIfc):
return conf
def descr(self):
return "Read environment"
return "environment"


@@ -449,7 +449,7 @@ class Talent(Message):
self.__build_header(0x99)
self.ifc.tx_add(b'\x01')
self.__finish_send_msg()
self.__process_data()
self.__process_data(False)
elif self.ctrl.is_resp():
return # ignore received response
@@ -464,7 +464,7 @@ class Talent(Message):
self.__build_header(0x99)
self.ifc.tx_add(b'\x01')
self.__finish_send_msg()
self.__process_data()
self.__process_data(True)
self.state = State.up # allow MODBUS cmds
if (self.modbus_polling):
self.mb_timer.start(self.mb_first_timeout)
@@ -479,8 +479,14 @@ class Talent(Message):
self.forward()
def __process_data(self):
def __process_data(self, ignore_replay: bool):
msg_hdr_len, ts = self.parse_msg_header()
if ignore_replay:
age = self._utc() - self._utcfromts(ts)
age = age/(3600*24)
logger.debug(f"Age: {age} days")
if age > 1:
return
for key, update in self.db.parse(self.ifc.rx_peek(), self.header_len
+ msg_hdr_len, self.node_id):
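The replay filter above drops data indications whose embedded timestamp is more than one day old, so Home Assistant only receives (near) real-time values. A minimal standalone sketch of the same age check, assuming the timestamp is a Unix epoch value in seconds (the proxy's own `_utc`/`_utcfromts` helpers are not reproduced here):

```python
import time

SECONDS_PER_DAY = 3600 * 24

def is_stale_replay(msg_ts: float, max_age_days: float = 1.0) -> bool:
    """Return True if the message timestamp is older than max_age_days."""
    age_days = (time.time() - msg_ts) / SECONDS_PER_DAY
    return age_days > max_age_days

# example: a replayed message from two days ago would be dropped
assert is_stale_replay(time.time() - 2 * SECONDS_PER_DAY)
```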


@@ -299,37 +299,53 @@ class Infos:
{% set result = 'noAlarm'%}
{%else%}
{% set result = '' %}
{% if val_int | bitwise_and(1)%}{% set result = result + 'Bit1, '%}
{% if val_int | bitwise_and(1)%}
{% set result = result + 'HBridgeFault, '%}
{% endif %}
{% if val_int | bitwise_and(2)%}{% set result = result + 'Bit2, '%}
{% if val_int | bitwise_and(2)%}
{% set result = result + 'DriVoltageFault, '%}
{% endif %}
{% if val_int | bitwise_and(3)%}{% set result = result + 'Bit3, '%}
{% if val_int | bitwise_and(3)%}
{% set result = result + 'GFDI-Fault, '%}
{% endif %}
{% if val_int | bitwise_and(4)%}{% set result = result + 'Bit4, '%}
{% if val_int | bitwise_and(4)%}
{% set result = result + 'OverTemp, '%}
{% endif %}
{% if val_int | bitwise_and(5)%}{% set result = result + 'Bit5, '%}
{% if val_int | bitwise_and(5)%}
{% set result = result + 'CommLose, '%}
{% endif %}
{% if val_int | bitwise_and(6)%}{% set result = result + 'Bit6, '%}
{% if val_int | bitwise_and(6)%}
{% set result = result + 'Bit6, '%}
{% endif %}
{% if val_int | bitwise_and(7)%}{% set result = result + 'Bit7, '%}
{% if val_int | bitwise_and(7)%}
{% set result = result + 'Bit7, '%}
{% endif %}
{% if val_int | bitwise_and(8)%}{% set result = result + 'Bit8, '%}
{% if val_int | bitwise_and(8)%}
{% set result = result + 'EEPROM-Fault, '%}
{% endif %}
{% if val_int | bitwise_and(9)%}{% set result = result + 'noUtility, '%}
{% if val_int | bitwise_and(9)%}
{% set result = result + 'NoUtility, '%}
{% endif %}
{% if val_int | bitwise_and(10)%}{% set result = result + 'Bit10, '%}
{% if val_int | bitwise_and(10)%}
{% set result = result + 'VG_Offset, '%}
{% endif %}
{% if val_int | bitwise_and(11)%}{% set result = result + 'Bit11, '%}
{% if val_int | bitwise_and(11)%}
{% set result = result + 'Relais_Open, '%}
{% endif %}
{% if val_int | bitwise_and(12)%}{% set result = result + 'Bit12, '%}
{% if val_int | bitwise_and(12)%}
{% set result = result + 'Relais_Short, '%}
{% endif %}
{% if val_int | bitwise_and(13)%}{% set result = result + 'Bit13, '%}
{% if val_int | bitwise_and(13)%}
{% set result = result + 'GridVoltOverRating, '%}
{% endif %}
{% if val_int | bitwise_and(14)%}{% set result = result + 'Bit14, '%}
{% if val_int | bitwise_and(14)%}
{% set result = result + 'GridVoltUnderRating, '%}
{% endif %}
{% if val_int | bitwise_and(15)%}{% set result = result + 'Bit15, '%}
{% if val_int | bitwise_and(15)%}
{% set result = result + 'GridFreqOverRating, '%}
{% endif %}
{% if val_int | bitwise_and(16)%}{% set result = result + 'Bit16, '%}
{% if val_int | bitwise_and(16)%}
{% set result = result + 'GridFreqUnderRating, '%}
{% endif %}
{% endif %}
{{ result }}
@@ -345,15 +361,20 @@ class Infos:
{% set result = 'noFault'%}
{%else%}
{% set result = '' %}
{% if val_int | bitwise_and(1)%}{% set result = result + 'Bit1, '%}
{% if val_int | bitwise_and(1)%}
{% set result = result + 'PVOV-Fault (PV OverVolt), '%}
{% endif %}
{% if val_int | bitwise_and(2)%}{% set result = result + 'Bit2, '%}
{% if val_int | bitwise_and(2)%}
{% set result = result + 'PVLV-Fault (PV LowVolt), '%}
{% endif %}
{% if val_int | bitwise_and(3)%}{% set result = result + 'Bit3, '%}
{% if val_int | bitwise_and(3)%}
{% set result = result + 'PV OI-Fault (PV OverCurrent), '%}
{% endif %}
{% if val_int | bitwise_and(4)%}{% set result = result + 'Bit4, '%}
{% if val_int | bitwise_and(4)%}
{% set result = result + 'PV OFV-Fault, '%}
{% endif %}
{% if val_int | bitwise_and(5)%}{% set result = result + 'Bit5, '%}
{% if val_int | bitwise_and(5)%}
{% set result = result + 'DC ShortCircuitFault, '%}
{% endif %}
{% if val_int | bitwise_and(6)%}{% set result = result + 'Bit6, '%}
{% endif %}
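The templates above translate the inverter's alarm and fault status words into human-readable names instead of generic `BitN` labels. As a rough, hypothetical Python equivalent (the bit-to-name mapping below is illustrative only and does not reproduce the proxy's exact bit assignments):

```python
def decode_bitfield(value: int, names: dict[int, str]) -> str:
    """Map the set bits of a status word to a comma-separated list of names."""
    if value == 0:
        return 'noAlarm'
    return ', '.join(name for bit, name in names.items() if value & (1 << bit))

# purely illustrative mapping -- not the proxy's real bit layout
ALARM_BITS = {0: 'HBridgeFault', 1: 'DriVoltageFault', 3: 'OverTemp'}
print(decode_bitfield(0b1001, ALARM_BITS))  # -> 'HBridgeFault, OverTemp'
```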


@@ -6,6 +6,7 @@ import json
import gc
from aiomqtt import MqttCodeError
from asyncio import StreamReader, StreamWriter
from ipaddress import ip_address
from inverter_ifc import InverterIfc
from proxy import Proxy
@@ -101,6 +102,20 @@ class InverterBase(InverterIfc, Proxy):
logging.info(f'[{stream.node_id}] Connect to {addr}')
connect = asyncio.open_connection(host, port)
reader, writer = await connect
r_addr = writer.get_extra_info('peername')
if r_addr is not None:
(ip, _) = r_addr
if ip_address(ip).is_private:
logging.error(
f"""resolve {host} to {ip}, which is a private IP!
\u001B[31m Check your DNS settings and use a public DNS resolver!
To prevent a possible loop, forwarding to local IP addresses is
not supported and is deactivated for subsequent connections
\u001B[0m
""")
Config.act_config[self.config_id]['enabled'] = False
ifc = AsyncStreamClient(
reader, writer, self.local, self.__del_remote)
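The guard above refuses to forward inverter traffic when the TSUN hostname resolves to a private address, which would indicate a DNS loop back to the proxy itself. A minimal standalone sketch of the same check using only the standard library (the hostname is just an example):

```python
import socket
from ipaddress import ip_address

def resolves_to_private(host: str) -> bool:
    """True if the host currently resolves to a private (or loopback) address."""
    return ip_address(socket.gethostbyname(host)).is_private

if resolves_to_private('logger.talent-monitoring.com'):
    print('DNS points to a local address -- forwarding is disabled to avoid a loop')
```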


@@ -58,13 +58,13 @@ formatter=console_formatter
class=handlers.TimedRotatingFileHandler
level=INFO
formatter=file_formatter
args=('log/proxy.log', when:='midnight')
args=(handlers.log_path + 'proxy.log', when:='midnight', backupCount:=handlers.log_backups)
[handler_file_handler_name2]
class=handlers.TimedRotatingFileHandler
level=NOTSET
formatter=file_formatter
args=('log/trace.log', when:='midnight')
args=(handlers.log_path + 'trace.log', when:='midnight', backupCount:=handlers.log_backups)
[formatter_console_formatter]
format=%(asctime)s %(levelname)5s | %(name)4s | %(message)s'


@@ -49,7 +49,7 @@ class ModbusTcp():
and 'monitor_sn' in inv
and 'client_mode' in inv):
client = inv['client_mode']
# logging.info(f"SerialNo:{inv['monitor_sn']} host:{client['host']} port:{client['port']}") # noqa: E501
logger.info(f"'client_mode' for snr: {inv['monitor_sn']} host: {client['host']}:{client['port']}, forward: {client['forward']}") # noqa: E501
loop.create_task(self.modbus_loop(client['host'],
client['port'],
inv['monitor_sn'],


@@ -1,5 +1,6 @@
import logging
import asyncio
import logging.handlers
import signal
import os
import argparse
@@ -81,7 +82,7 @@ async def handle_client(reader: StreamReader, writer: StreamWriter, inv_class):
await inv.local.ifc.server_loop()
async def handle_shutdown(web_task):
async def handle_shutdown(loop, web_task):
'''Close all TCP connections and stop the event loop'''
logging.info('Shutdown due to SIGTERM')
@@ -131,16 +132,21 @@ def get_log_level() -> int:
return log_level
if __name__ == "__main__": # pragma: no cover
def main(): # pragma: no cover
parser = argparse.ArgumentParser()
parser.add_argument('-p', '--config_path', type=str,
parser.add_argument('-c', '--config_path', type=str,
default='./config/',
help='set path for the configuration files')
parser.add_argument('-j', '--json_config', type=str,
help='read user config from json-file')
parser.add_argument('-t', '--toml_config', type=str,
help='read user config from toml-file')
parser.add_argument('--add_on', action='store_true')
parser.add_argument('-l', '--log_path', type=str,
default='./log/',
help='set path for the logging files')
parser.add_argument('-b', '--log_backups', type=int,
default=0,
help='set max number of daily log-files')
args = parser.parse_args()
#
# Setup our daily, rotating logger
@@ -148,12 +154,20 @@ if __name__ == "__main__": # pragma: no cover
serv_name = os.getenv('SERVICE_NAME', 'proxy')
version = os.getenv('VERSION', 'unknown')
setattr(logging.handlers, "log_path", args.log_path)
setattr(logging.handlers, "log_backups", args.log_backups)
logging.config.fileConfig('logging.ini')
logging.info(f'Server "{serv_name} - {version}" will be started')
logging.info(f"AddOn: {args.add_on}")
logging.info(f'current dir: {os.getcwd()}')
logging.info(f"config_path: {args.config_path}")
logging.info(f"json_config: {args.json_config}")
logging.info(f"toml_config: {args.toml_config}")
logging.info(f"log_path: {args.log_path}")
if args.log_backups == 0:
logging.info("log_backups: unlimited")
else:
logging.info(f"log_backups: {args.log_backups} days")
log_level = get_log_level()
logging.info('******')
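The new `--log_path` and `--log_backups` options are handed to `logging.ini` as attributes on `logging.handlers` (the `setattr` calls above), where the handler definitions pick them up. A minimal standalone sketch of the resulting handler setup, assuming the defaults from the argument parser and the formatter string used in `logging.ini`:

```python
import logging
import os
from logging.handlers import TimedRotatingFileHandler

log_path = './log/'   # --log_path default (assumption: taken from the parser above)
log_backups = 0       # --log_backups default; 0 keeps all rotated files

os.makedirs(log_path, exist_ok=True)
handler = TimedRotatingFileHandler(log_path + 'proxy.log',
                                   when='midnight',
                                   backupCount=log_backups)
handler.setFormatter(logging.Formatter(
    '%(asctime)s %(levelname)5s | %(name)4s | %(message)s'))
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)
logging.info('daily rotating log configured')
```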
@@ -176,10 +190,12 @@ if __name__ == "__main__": # pragma: no cover
ConfigReadToml(args.config_path + "config.toml")
ConfigReadJson(args.json_config)
ConfigReadToml(args.toml_config)
ConfigErr = Config.get_error()
config_err = Config.get_error()
if config_err is not None:
logging.info(f'config_err: {config_err}')
return
if ConfigErr is not None:
logging.info(f'ConfigErr: {ConfigErr}')
logging.info('******')
Proxy.class_init()
@@ -192,6 +208,7 @@ if __name__ == "__main__": # pragma: no cover
# and we can't receive and handle the UNIX signals!
#
for inv_class, port in [(InverterG3, 5005), (InverterG3P, 10000)]:
logging.info(f'listen on port: {port} for inverters')
loop.create_task(asyncio.start_server(lambda r, w, i=inv_class:
handle_client(r, w, i),
'0.0.0.0', port))
@@ -204,12 +221,12 @@ if __name__ == "__main__": # pragma: no cover
for signame in ('SIGINT', 'SIGTERM'):
loop.add_signal_handler(getattr(signal, signame),
lambda loop=loop: asyncio.create_task(
handle_shutdown(web_task)))
handle_shutdown(loop, web_task)))
loop.set_debug(log_level == logging.DEBUG)
try:
if ConfigErr is None:
proxy_is_up = True
global proxy_is_up
proxy_is_up = True
loop.run_forever()
except KeyboardInterrupt:
pass
@@ -219,3 +236,7 @@ if __name__ == "__main__": # pragma: no cover
logging.debug('Close event loop')
loop.close()
logging.info(f'Finally, exit Server "{serv_name}"')
if __name__ == "__main__": # pragma: no cover
main()
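`handle_shutdown` now receives the event loop explicitly. A minimal sketch of the registration pattern used above, with a simplified cleanup body (Unix only, since `add_signal_handler` is not available on Windows):

```python
import asyncio
import signal

async def handle_shutdown(loop: asyncio.AbstractEventLoop) -> None:
    """Cancel outstanding tasks and stop the event loop."""
    for task in asyncio.all_tasks():
        if task is not asyncio.current_task():
            task.cancel()
    loop.stop()

def install_signal_handlers(loop: asyncio.AbstractEventLoop) -> None:
    # register SIGINT/SIGTERM so the container shuts down cleanly
    for signame in ('SIGINT', 'SIGTERM'):
        loop.add_signal_handler(
            getattr(signal, signame),
            lambda: asyncio.create_task(handle_shutdown(loop)))
```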


@@ -195,10 +195,10 @@ def test_cnv4():
"node_id": "PV-Garage/",
"suggested_area": "Garage",
"modbus_polling": False,
"pv1_manufacturer": "man1",
"pv1_type": "type1",
"pv2_manufacturer": "man2",
"pv2_type": "type2",
"pv1.manufacturer": "man1",
"pv1.type": "type1",
"pv2.manufacturer": "man2",
"pv2.type": "type2",
"sensor_list": 688
},
{
@@ -207,16 +207,17 @@ def test_cnv4():
"node_id": "PV-Garage2/",
"suggested_area": "Garage2",
"modbus_polling": True,
"client_mode_host": "InverterIP",
"client_mode_port": 1234,
"pv1_manufacturer": "man1",
"pv1_type": "type1",
"pv2_manufacturer": "man2",
"pv2_type": "type2",
"pv3_manufacturer": "man3",
"pv3_type": "type3",
"pv4_manufacturer": "man4",
"pv4_type": "type4",
"client_mode.host": "InverterIP",
"client_mode.port": 1234,
"client_mode.forward": True,
"pv1.manufacturer": "man1",
"pv1.type": "type1",
"pv2.manufacturer": "man2",
"pv2.type": "type2",
"pv3.manufacturer": "man3",
"pv3.type": "type3",
"pv4.manufacturer": "man4",
"pv4.type": "type4",
"sensor_list": 688
}
],
@@ -247,25 +248,33 @@ def test_cnv4():
'block': ['AT+SUPDATE']}}},
'inverters': {'R170000000000001': {'modbus_polling': False,
'node_id': 'PV-Garage/',
'pv1_manufacturer': 'man1',
'pv1_type': 'type1',
'pv2_manufacturer': 'man2',
'pv2_type': 'type2',
'pv1': {
'manufacturer': 'man1',
'type': 'type1'},
'pv2': {
'manufacturer': 'man2',
'type': 'type2'},
'sensor_list': 688,
'suggested_area': 'Garage'},
'Y170000000000001': {'client_mode_host': 'InverterIP',
'client_mode_port': 1234,
'Y170000000000001': {'client_mode': {
'host': 'InverterIP',
'port': 1234,
'forward': True},
'modbus_polling': True,
'monitor_sn': 2000000000,
'node_id': 'PV-Garage2/',
'pv1_manufacturer': 'man1',
'pv1_type': 'type1',
'pv2_manufacturer': 'man2',
'pv2_type': 'type2',
'pv3_manufacturer': 'man3',
'pv3_type': 'type3',
'pv4_manufacturer': 'man4',
'pv4_type': 'type4',
'pv1': {
'manufacturer': 'man1',
'type': 'type1'},
'pv2': {
'manufacturer': 'man2',
'type': 'type2'},
'pv3': {
'manufacturer': 'man3',
'type': 'type3'},
'pv4': {
'manufacturer': 'man4',
'type': 'type4'},
'sensor_list': 688,
'suggested_area': 'Garage2'},
'allow_all': False},
@@ -362,8 +371,6 @@ def test_full_config(ConfigComplete):
"node_id": "PV-Garage2/",
"suggested_area": "Garage2",
"modbus_polling": true,
"client_mode_host": "InverterIP",
"client_mode_port": 1234,
"pv1.manufacturer": "man1",
"pv1.type": "type1",
"pv2.manufacturer": "man2",


@@ -54,11 +54,12 @@ class FakeReader():
class FakeWriter():
peer = ('47.1.2.3', 10000)
def write(self, buf: bytes):
return
def get_extra_info(self, sel: str):
if sel == 'peername':
return 'remote.intern'
return self.peer
elif sel == 'sockname':
return 'sock:1234'
assert False
@@ -241,6 +242,118 @@ async def test_remote_conn(config_conn, patch_open_connection):
cnt += 1
assert cnt == 0
@pytest.mark.asyncio
async def test_remote_conn_to_private(config_conn, patch_open_connection):
'''check DNS resolving of the TSUN FQDN to a local address'''
_ = config_conn
_ = patch_open_connection
assert asyncio.get_running_loop()
InverterBase._registry.clear()
reader = FakeReader()
writer = FakeWriter()
FakeWriter.peer = ("192.168.0.1", 10000)
with InverterBase(reader, writer, 'tsun', Talent) as inverter:
assert inverter.local.stream
assert inverter.local.ifc
await inverter.create_remote()
await asyncio.sleep(0)
assert not Config.act_config['tsun']['enabled']
assert inverter.remote.stream
assert inverter.remote.ifc
assert inverter.local.ifc.healthy()
# outside the context manager the unhealthy AsyncStream is released
FakeWriter.peer = ("47.1.2.3", 10000)
cnt = 0
for inv in InverterBase:
assert inv.healthy() # inverter is healthy again (without the unhealthy AsyncStream)
cnt += 1
del inv
assert cnt == 1
del inverter
cnt = 0
for inv in InverterBase:
print(f'InverterBase refs:{gc.get_referrers(inv)}')
cnt += 1
assert cnt == 0
@pytest.mark.asyncio
async def test_remote_conn_to_loopback(config_conn, patch_open_connection):
'''check DNS resolving of the TSUN FQDN to the loopback address'''
_ = config_conn
_ = patch_open_connection
assert asyncio.get_running_loop()
InverterBase._registry.clear()
reader = FakeReader()
writer = FakeWriter()
FakeWriter.peer = ("127.0.0.1", 10000)
with InverterBase(reader, writer, 'tsun', Talent) as inverter:
assert inverter.local.stream
assert inverter.local.ifc
await inverter.create_remote()
await asyncio.sleep(0)
assert not Config.act_config['tsun']['enabled']
assert inverter.remote.stream
assert inverter.remote.ifc
assert inverter.local.ifc.healthy()
# outside the context manager the unhealthy AsyncStream is released
FakeWriter.peer = ("47.1.2.3", 10000)
cnt = 0
for inv in InverterBase:
assert inv.healthy() # inverter is healthy again (without the unhealthy AsyncStream)
cnt += 1
del inv
assert cnt == 1
del inverter
cnt = 0
for inv in InverterBase:
print(f'InverterBase refs:{gc.get_referrers(inv)}')
cnt += 1
assert cnt == 0
@pytest.mark.asyncio
async def test_remote_conn_to_None(config_conn, patch_open_connection):
'''check if get_extra_info() returns None in case of an error'''
_ = config_conn
_ = patch_open_connection
assert asyncio.get_running_loop()
InverterBase._registry.clear()
reader = FakeReader()
writer = FakeWriter()
FakeWriter.peer = None
with InverterBase(reader, writer, 'tsun', Talent) as inverter:
assert inverter.local.stream
assert inverter.local.ifc
await inverter.create_remote()
await asyncio.sleep(0)
assert Config.act_config['tsun']['enabled']
assert inverter.remote.stream
assert inverter.remote.ifc
assert inverter.local.ifc.healthy()
# outside the context manager the unhealthy AsyncStream is released
FakeWriter.peer = ("47.1.2.3", 10000)
cnt = 0
for inv in InverterBase:
assert inv.healthy() # inverter is healthy again (without the unhealthy AsyncStream)
cnt += 1
del inv
assert cnt == 1
del inverter
cnt = 0
for inv in InverterBase:
print(f'InverterBase refs:{gc.get_referrers(inv)}')
cnt += 1
assert cnt == 0
@pytest.mark.asyncio
async def test_unhealthy_remote(config_conn, patch_open_connection, patch_unhealthy_remote):
_ = config_conn


@@ -59,7 +59,7 @@ class FakeWriter():
return
def get_extra_info(self, sel: str):
if sel == 'peername':
return 'remote.intern'
return ('47.1.2.3', 10000)
elif sel == 'sockname':
return 'sock:1234'
assert False


@@ -58,7 +58,7 @@ class FakeWriter():
return
def get_extra_info(self, sel: str):
if sel == 'peername':
return 'remote.intern'
return ('47.1.2.3', 10000)
elif sel == 'sockname':
return 'sock:1234'
assert False
@@ -94,7 +94,8 @@ def patch_open_connection():
with patch.object(asyncio, 'open_connection', new_open) as conn:
yield conn
def test_method_calls():
def test_method_calls(config_conn):
_ = config_conn
reader = FakeReader()
writer = FakeWriter()
InverterBase._registry.clear()


@@ -328,6 +328,90 @@ def msg_inverter_ind_new(): # Data indication from DSP V5.0.17
msg += b'\x00\x00\x00\x00'
return msg
@pytest.fixture
def msg_inverter_ind_new2(): # Data indication from DSP V5.0.17
msg = b'\x00\x00\x04\xf4\x10R170000000000001\x91\x04\x01\x90\x00\x01\x10R170000000000001'
msg += b'\x01\x00\x00\x01'
msg += b'\x86\x98\x55\xe7\x48\x00\x00\x00\xa3\x00\x00\x01\x93\x53\x00\x00'
msg += b'\x00\x00\x01\x94\x53\x00\x00\x00\x00\x01\x95\x53\x00\x00\x00\x00'
msg += b'\x01\x96\x53\x00\x00\x00\x00\x01\x97\x53\x00\x00\x00\x00\x01\x98'
msg += b'\x53\x00\x00\x00\x00\x01\x99\x53\x00\x00\x00\x00\x01\x9a\x53\x00'
msg += b'\x00\x00\x00\x01\x9b\x53\x00\x00\x00\x00\x01\x9c\x53\x00\x00\x00'
msg += b'\x00\x01\x9d\x53\x00\x00\x00\x00\x01\x9e\x53\x00\x00\x00\x00\x01'
msg += b'\x9f\x53\x00\x00\x00\x00\x01\xa0\x53\x00\x00\x00\x00\x01\xf4\x49'
msg += b'\x00\x00\x00\x00\x00\x00\x01\xf5\x53\x00\x00\x00\x00\x01\xf6\x53'
msg += b'\x00\x00\x00\x00\x01\xf7\x53\x00\x00\x00\x00\x01\xf8\x53\x00\x00'
msg += b'\x00\x00\x01\xf9\x53\x00\x00\x00\x00\x01\xfa\x53\x00\x00\x00\x00'
msg += b'\x01\xfb\x53\x00\x00\x00\x00\x01\xfc\x53\x00\x00\x00\x00\x01\xfd'
msg += b'\x53\x00\x00\x00\x00\x01\xfe\x53\x00\x00\x00\x00\x01\xff\x53\x00'
msg += b'\x00\x00\x00\x02\x00\x53\x00\x00\x00\x00\x02\x01\x53\x00\x00\x00'
msg += b'\x00\x02\x02\x53\x00\x00\x00\x00\x02\x03\x53\x00\x00\x00\x00\x02'
msg += b'\x04\x53\x00\x00\x00\x00\x02\x58\x49\x00\x00\x00\x00\x00\x00\x02'
msg += b'\x59\x53\x00\x00\x00\x00\x02\x5a\x53\x00\x00\x00\x00\x02\x5b\x53'
msg += b'\x00\x00\x00\x00\x02\x5c\x53\x00\x00\x00\x00\x02\x5d\x53\x00\x00'
msg += b'\x00\x00\x02\x5e\x53\x00\x00\x00\x00\x02\x5f\x53\x00\x00\x00\x00'
msg += b'\x02\x60\x53\x00\x00\x00\x00\x02\x61\x53\x00\x00\x00\x00\x02\x62'
msg += b'\x53\x00\x00\x00\x00\x02\x63\x53\x00\x00\x00\x00\x02\x64\x53\x00'
msg += b'\x00\x00\x00\x02\x65\x53\x00\x00\x00\x00\x02\x66\x53\x00\x00\x00'
msg += b'\x00\x02\x67\x53\x00\x00\x00\x00\x02\x68\x53\x00\x00\x00\x00\x02'
msg += b'\xbc\x49\x00\x00\x00\x00\x00\x00\x02\xbd\x53\x00\x00\x00\x00\x02'
msg += b'\xbe\x53\x00\x00\x00\x00\x02\xbf\x53\x00\x00\x00\x00\x02\xc0\x53'
msg += b'\x00\x00\x00\x00\x02\xc1\x53\x00\x00\x00\x00\x02\xc2\x53\x00\x00'
msg += b'\x00\x00\x02\xc3\x53\x00\x00\x00\x00\x02\xc4\x53\x00\x00\x00\x00'
msg += b'\x02\xc5\x53\x00\x00\x00\x00\x02\xc6\x53\x00\x00\x00\x00\x02\xc7'
msg += b'\x53\x00\x00\x00\x00\x02\xc8\x53\x00\x00\x00\x00\x02\xc9\x53\x00'
msg += b'\x00\x00\x00\x02\xca\x53\x00\x00\x00\x00\x02\xcb\x53\x00\x00\x00'
msg += b'\x00\x02\xcc\x53\x00\x00\x00\x00\x03\x20\x53\x00\x00\x00\x00\x03'
msg += b'\x84\x53\x50\x11\x00\x00\x03\xe8\x46\x43\x65\xcc\xcd\x00\x00\x04'
msg += b'\x4c\x46\x40\x0c\xcc\xcd\x00\x00\x04\xb0\x46\x42\x47\xd7\x0a\x00'
msg += b'\x00\x05\x14\x53\x00\x35\x00\x00\x05\x78\x53\x00\x00\x00\x00\x05'
msg += b'\xdc\x53\x03\x20\x00\x00\x06\x40\x46\x43\xfd\x4c\xcd\x00\x00\x06'
msg += b'\xa4\x46\x42\x18\x00\x00\x00\x00\x07\x08\x46\x40\xde\x14\x7b\x00'
msg += b'\x00\x07\x6c\x46\x43\x84\x33\x33\x00\x00\x07\xd0\x46\x42\x1a\x00'
msg += b'\x00\x00\x00\x08\x34\x46\x40\xda\x8f\x5c\x00\x00\x08\x98\x46\x43'
msg += b'\x83\xb3\x33\x00\x00\x08\xfc\x46\x00\x00\x00\x00\x00\x00\x09\x60'
msg += b'\x46\x00\x00\x00\x00\x00\x00\x09\xc4\x46\x00\x00\x00\x00\x00\x00'
msg += b'\x0a\x28\x46\x00\x00\x00\x00\x00\x00\x0a\x8c\x46\x00\x00\x00\x00'
msg += b'\x00\x00\x0a\xf0\x46\x00\x00\x00\x00\x00\x00\x0b\x54\x46\x40\x9c'
msg += b'\xcc\xcd\x00\x00\x0b\xb8\x46\x43\xea\xb5\xc3\x00\x00\x0c\x1c\x46'
msg += b'\x40\x1e\xb8\x52\x00\x00\x0c\x80\x46\x43\x6d\x2b\x85\x00\x00\x0c'
msg += b'\xe4\x46\x40\x1a\xe1\x48\x00\x00\x0d\x48\x46\x43\x68\x40\x00\x00'
msg += b'\x00\x0d\xac\x46\x00\x00\x00\x00\x00\x00\x0e\x10\x46\x00\x00\x00'
msg += b'\x00\x00\x00\x0e\x74\x46\x00\x00\x00\x00\x00\x00\x0e\xd8\x46\x00'
msg += b'\x00\x00\x00\x00\x00\x0f\x3c\x53\x00\x00\x00\x00\x0f\xa0\x53\x00'
msg += b'\x00\x00\x00\x10\x04\x53\x55\xaa\x00\x00\x10\x68\x53\x00\x01\x00'
msg += b'\x00\x10\xcc\x53\x00\x00\x00\x00\x11\x30\x53\x00\x00\x00\x00\x11'
msg += b'\x94\x53\x00\x00\x00\x00\x11\xf8\x53\xff\xff\x00\x00\x12\x5c\x53'
msg += b'\xff\xff\x00\x00\x12\xc0\x53\x00\x00\x00\x00\x13\x24\x53\xff\xff'
msg += b'\x00\x00\x13\x88\x53\xff\xff\x00\x00\x13\xec\x53\xff\xff\x00\x00'
msg += b'\x14\x50\x53\xff\xff\x00\x00\x14\xb4\x53\xff\xff\x00\x00\x15\x18'
msg += b'\x53\xff\xff\x00\x00\x15\x7c\x53\x00\x00\x00\x00\x27\x10\x53\x00'
msg += b'\x02\x00\x00\x27\x74\x53\x00\x3c\x00\x00\x27\xd8\x53\x00\x68\x00'
msg += b'\x00\x28\x3c\x53\x05\x00\x00\x00\x28\xa0\x46\x43\x79\x00\x00\x00'
msg += b'\x00\x29\x04\x46\x43\x48\x00\x00\x00\x00\x29\x68\x46\x42\x48\x33'
msg += b'\x33\x00\x00\x29\xcc\x46\x42\x3e\x3d\x71\x00\x00\x2a\x30\x53\x00'
msg += b'\x01\x00\x00\x2a\x94\x46\x43\x37\x00\x00\x00\x00\x2a\xf8\x46\x42'
msg += b'\xce\x00\x00\x00\x00\x2b\x5c\x53\x00\x96\x00\x00\x2b\xc0\x53\x00'
msg += b'\x10\x00\x00\x2c\x24\x46\x43\x90\x00\x00\x00\x00\x2c\x88\x46\x43'
msg += b'\x95\x00\x00\x00\x00\x2c\xec\x53\x00\x06\x00\x00\x2d\x50\x53\x00'
msg += b'\x06\x00\x00\x2d\xb4\x46\x43\x7d\x00\x00\x00\x00\x2e\x18\x46\x42'
msg += b'\x3d\xeb\x85\x00\x00\x2e\x7c\x46\x42\x3d\xeb\x85\x00\x00\x2e\xe0'
msg += b'\x53\x00\x03\x00\x00\x2f\x44\x53\x00\x03\x00\x00\x2f\xa8\x46\x42'
msg += b'\x4d\xeb\x85\x00\x00\x30\x0c\x46\x42\x4d\xeb\x85\x00\x00\x30\x70'
msg += b'\x53\x00\x03\x00\x00\x30\xd4\x53\x00\x03\x00\x00\x31\x38\x46\x42'
msg += b'\x08\x00\x00\x00\x00\x31\x9c\x53\x00\x05\x00\x00\x32\x00\x53\x04'
msg += b'\x00\x00\x00\x32\x64\x53\x00\x01\x00\x00\x32\xc8\x53\x13\x9c\x00'
msg += b'\x00\x33\x2c\x53\x0f\xa0\x00\x00\x33\x90\x53\x00\x4f\x00\x00\x33'
msg += b'\xf4\x53\x00\x66\x00\x00\x34\x58\x53\x03\xe8\x00\x00\x34\xbc\x53'
msg += b'\x04\x00\x00\x00\x35\x20\x53\x00\x00\x00\x00\x35\x84\x53\x00\x00'
msg += b'\x00\x00\x35\xe8\x53\x00\x00\x00\x00\x36\x4c\x53\x00\x00\x00\x01'
msg += b'\x38\x80\x53\x00\x02\x00\x01\x38\x81\x53\x00\x01\x00\x01\x38\x82'
msg += b'\x53\x00\x01\x00\x01\x38\x83\x53\x00\x00\x00\x00\x00\x0a\x08\x00'
msg += b'\x00\x00\x00\x00\x00\x00\x00\x14\x04\x00\x00\x00\x00\x00\x00\x00'
msg += b'\x00\x1e\x07\x00\x00\x00\x00\x00'
return msg
@pytest.fixture
def msg_inverter_ind_0w(): # Data indication with 0.5W grid output
msg = b'\x00\x00\x05\x02\x10R170000000000001\x91\x04\x01\x90\x00\x01\x10R170000000000001'
@@ -2151,3 +2235,34 @@ def test_timeout(config_tsun_inv1):
m.modbus_polling = False
assert Talent.MAX_DEF_IDLE_TIME == m._timeout()
m.close()
def test_msg_inv_replay(config_tsun_inv1, msg_inverter_ind_0w, msg_inverter_ind_new2):
'''replays must be ignored, since HA only supports realtime values'''
_ = config_tsun_inv1
m = MemoryStream(msg_inverter_ind_0w, (0,)) # realtime msg with 0.5W Output Power
m.append_msg(msg_inverter_ind_new2) # replay msg with 506.6W Output Power
m.db.db['grid'] = {'Output_Power': 100}
m.db.stat['proxy']['Unknown_Ctrl'] = 0
m.db.stat['proxy']['Invalid_Data_Type'] = 0
m.read() # read complete msg, and dispatch msg
assert m.db.stat['proxy']['Unknown_Ctrl'] == 0
assert m.db.stat['proxy']['Invalid_Data_Type'] == 0
assert not m.header_valid # must be invalid, since msg was handled and buffer flushed
assert m.msg_count == 2
assert m.msg_recvd[0]['ctrl']==145
assert m.msg_recvd[0]['msg_id']==4
assert m.msg_recvd[0]['header_len']==23
assert m.msg_recvd[0]['data_len']==1263
assert m.msg_recvd[1]['ctrl']==145
assert m.msg_recvd[1]['msg_id']==4
assert m.msg_recvd[1]['header_len']==23
assert m.msg_recvd[1]['data_len']==1249
assert m.id_str == b"R170000000000001"
assert m.unique_id == 'R170000000000001'
assert m.db.get_db_value(Register.INVERTER_STATUS) == 1
assert isclose(m.db.db['grid']['Output_Power'], 0.5) # must be 0.5W not 100W nor 506.6W
m.close()
assert m.db.get_db_value(Register.INVERTER_STATUS) == 0


@@ -1,51 +0,0 @@
#!/bin/bash
# Usage: ./build.sh [dev|rc|rel]
# dev: development build
# rc: release candidate build
# rel: release build and push to ghcr.io
# Note: for release build, you need to set GHCR_TOKEN
# export GHCR_TOKEN=<YOUR_GITHUB_TOKEN> in your .zprofile
# see also: https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry
set -e
BUILD_DATE=$(date -Iminutes)
BRANCH=$(git rev-parse --abbrev-ref HEAD)
VERSION=$(git describe --tags --abbrev=0)
VERSION="${VERSION:1}"
arr=(${VERSION//./ })
MAJOR=${arr[0]}
IMAGE=tsun-gen3-proxy
GREEN='\033[0;32m'
BLUE='\033[0;34m'
NC='\033[0m'
if [[ $1 == debug ]] || [[ $1 == dev ]] ;then
IMAGE=docker.io/sallius/${IMAGE}
VERSION=${VERSION}+$1
elif [[ $1 == rc ]] || [[ $1 == rel ]] || [[ $1 == preview ]] ;then
IMAGE=ghcr.io/s-allius/${IMAGE}
echo 'login to ghcr.io'
echo $GHCR_TOKEN | docker login ghcr.io -u s-allius --password-stdin
else
echo argument missing!
echo try: $0 '[debug|dev|preview|rc|rel]'
exit 1
fi
export IMAGE
export VERSION
export BUILD_DATE
export BRANCH
export MAJOR
echo version: $VERSION build-date: $BUILD_DATE image: $IMAGE
docker buildx bake -f app/docker-bake.hcl $1
echo -e "${BLUE} => checking docker-compose.yaml file${NC}"
docker-compose config -q
echo
echo -e "${GREEN}${BUILD_DATE} => Version: ${VERSION}${NC} finished"
echo

ha_addons/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
.data.json
config.yaml

ha_addons/Makefile Normal file

@@ -0,0 +1,136 @@
#!make
include ../.env
.PHONY: debug dev build clean rootfs repro rc rel
SHELL = /bin/sh
JINJA = jinja2
IMAGE = tsun-gen3-addon
# Folders
SRC=../app
SRC_PROXY=$(SRC)/src
CNF_PROXY=$(SRC)/config
ADDON_PATH = ha_addon
DST=$(ADDON_PATH)/rootfs
DST_PROXY=$(DST)/home/proxy
INST_BASE=../../ha-addons/ha-addons
TEMPL=templates
# collect source files
SRC_FILES := $(wildcard $(SRC_PROXY)/*.py)\
$(wildcard $(SRC_PROXY)/*.ini)\
$(wildcard $(SRC_PROXY)/cnf/*.py)\
$(wildcard $(SRC_PROXY)/gen3/*.py)\
$(wildcard $(SRC_PROXY)/gen3plus/*.py)
CNF_FILES := $(wildcard $(CNF_PROXY)/*.toml)
# determine destination files
TARGET_FILES = $(SRC_FILES:$(SRC_PROXY)/%=$(DST_PROXY)/%)
CONFIG_FILES = $(CNF_FILES:$(CNF_PROXY)/%=$(DST_PROXY)/%)
export BUILD_DATE := ${shell date -Iminutes}
VERSION := $(shell cat $(SRC)/.version)
export MAJOR := $(shell echo $(VERSION) | cut -f1 -d.)
PUBLIC_URL := $(shell echo $(PUBLIC_CONTAINER_REGISTRY) | cut -f1 -d/)
PUBLIC_USER :=$(shell echo $(PUBLIC_CONTAINER_REGISTRY) | cut -f2 -d/)
dev debug: build
@echo version: $(VERSION) build-date: $(BUILD_DATE) image: $(PRIVAT_CONTAINER_REGISTRY)$(IMAGE)
export VERSION=$(VERSION)-$@ && \
export IMAGE=$(PRIVAT_CONTAINER_REGISTRY)$(IMAGE) && \
docker buildx bake -f docker-bake.hcl $@
rc rel: build
@echo version: $(VERSION) build-date: $(BUILD_DATE) image: $(PUBLIC_CONTAINER_REGISTRY)$(IMAGE)
@echo login at $(PUBLIC_URL) as $(PUBLIC_USER)
@DO_LOGIN="$(shell echo $(PUBLIC_CR_KEY) | docker login $(PUBLIC_URL) -u $(PUBLIC_USER) --password-stdin)"
export VERSION=$(VERSION)-$@ && \
export IMAGE=$(PUBLIC_CONTAINER_REGISTRY)$(IMAGE) && \
docker buildx bake -f docker-bake.hcl $@
build: rootfs $(ADDON_PATH)/config.yaml repro
clean:
rm -r -f $(DST_PROXY)
rm -f $(DST)/requirements.txt
rm -f $(ADDON_PATH)/config.yaml
rm -f $(TEMPL)/.data.json
#
# Build rootfs and config.yaml as local add-on
# The rootfs is needed to build the add-on Docker containers
#
rootfs: $(TARGET_FILES) $(CONFIG_FILES) $(DST)/requirements.txt
STAGE=dev
debug : STAGE=debug
rc : STAGE=rc
rel : STAGE=rel
$(CONFIG_FILES): $(DST_PROXY)/% : $(CNF_PROXY)/%
@echo Copy $< to $@
@mkdir -p $(@D)
@cp $< $@
$(TARGET_FILES): $(DST_PROXY)/% : $(SRC_PROXY)/%
@echo Copy $< to $@
@mkdir -p $(@D)
@cp $< $@
$(DST)/requirements.txt : $(SRC)/requirements.txt
@echo Copy $< to $@
@cp $< $@
$(ADDON_PATH)/%.yaml: $(TEMPL)/%.jinja $(TEMPL)/.data.json
$(JINJA) --strict --format=json $^ -o $@
$(TEMPL)/.data.json: FORCE
rsync --checksum $(TEMPL)/$(STAGE)_data.json $@
FORCE : ;
#
# Build repository for Home Assistant Add-On
#
INST=$(INST_BASE)/ha_addon_dev
repro_files = DOCS.md icon.png logo.png translations/de.yaml translations/en.yaml
repro_root = CHANGELOG.md
repro_templates = config.yaml
repro_subdirs = translations
repro_vers = debug dev rel
repro_all_files := $(foreach dir,$(repro_vers), $(foreach file,$(repro_files),$(INST_BASE)/ha_addon_$(dir)/$(file)))
repro_root_files := $(foreach dir,$(repro_vers), $(foreach file,$(repro_root),$(INST_BASE)/ha_addon_$(dir)/$(file)))
repro_all_templates := $(foreach dir,$(repro_vers), $(foreach file,$(repro_templates),$(INST_BASE)/ha_addon_$(dir)/$(file)))
repro_all_subdirs := $(foreach dir,$(repro_vers), $(foreach file,$(repro_subdirs),$(INST_BASE)/ha_addon_$(dir)/$(file)))
repro: $(repro_all_subdirs) $(repro_all_templates) $(repro_all_files) $(repro_root_files)
$(repro_all_subdirs) :
mkdir -p $@
$(repro_all_templates) : $(INST_BASE)/ha_addon_%/config.yaml: $(TEMPL)/config.jinja $(TEMPL)/%_data.json $(SRC)/.version
$(JINJA) --strict -D AppVersion=$(VERSION) $< $(filter %.json,$^) -o $@
$(repro_root_files) : %/CHANGELOG.md : ../CHANGELOG.md
cp $< $@
$(filter $(INST_BASE)/ha_addon_debug/%,$(repro_all_files)) : $(INST_BASE)/ha_addon_debug/% : ha_addon/%
cp $< $@
$(filter $(INST_BASE)/ha_addon_dev/%,$(repro_all_files)) : $(INST_BASE)/ha_addon_dev/% : ha_addon/%
cp $< $@
$(filter $(INST_BASE)/ha_addon_rel/%,$(repro_all_files)) : $(INST_BASE)/ha_addon_rel/% : ha_addon/%
cp $< $@
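The add-on's `config.yaml` files are rendered from a Jinja template plus a stage-specific JSON data file via jinja2-cli (`$(JINJA) --strict --format=json ...`). Roughly the same rendering can be reproduced from Python; the paths are taken from the rules above, the version string is an example, and the code is only a sketch:

```python
import json
from jinja2 import Environment, FileSystemLoader, StrictUndefined

# stage-specific data file, e.g. templates/dev_data.json (see the rsync rule above)
with open('templates/dev_data.json') as f:
    data = json.load(f)

env = Environment(loader=FileSystemLoader('templates'), undefined=StrictUndefined)
rendered = env.get_template('config.jinja').render(AppVersion='0.12.0', **data)

with open('ha_addon/config.yaml', 'w') as f:
    f.write(rendered)
```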


@@ -18,7 +18,7 @@ variable "DESCRIPTION" {
}
target "_common" {
context = "."
context = "ha_addon"
dockerfile = "Dockerfile"
args = {
VERSION = "${VERSION}"

ha_addons/ha_addon/DOCS.md Normal file

@@ -0,0 +1,162 @@
# Home Assistant Add-on: TSUN Proxy
[TSUN Proxy][tsunproxy] enables a reliable connection between TSUN third generation
inverters and an MQTT broker. With the proxy, you can easily retrieve real-time values
such as power, current and daily energy and integrate the inverter into Home Assistant.
This works even without an internet connection.
The optional connection to the TSUN Cloud can be disabled!
## Pre-requisites
1. This Add-on requires an MQTT broker to work.
For a typical installation, we recommend the [Mosquitto add-on][Mosquitto] running on your Home Assistant.
2. You need to loop the proxy into the connection between the inverter and the TSUN Cloud.
To do this, you must adapt the DNS records within the network that your inverter uses: map
logger.talent-monitoring.com and/or iot.talent-monitoring.com to the IP address of your
Home Assistant.
This can be done, for example, by adding a local DNS record to [AdGuard Home Add-on][AdGuard]
(navigate to `filters` on the AdGuard panel and add an entry under `custom filtering rules`).
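To verify that the DNS override is in place, you can resolve the hostnames from a machine inside the same network; a minimal sketch (the Home Assistant IP below is only an example):

```python
import socket

HA_IP = '192.168.1.10'   # example: the IP address of your Home Assistant host

for host in ('logger.talent-monitoring.com', 'iot.talent-monitoring.com'):
    resolved = socket.gethostbyname(host)
    status = 'OK' if resolved == HA_IP else 'still resolves to the TSUN cloud'
    print(f'{host} -> {resolved} ({status})')
```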
## Installation
Installing this add-on is straightforward and no different from
installing any other Home Assistant add-on.
1. Add the repository URL to the Home Assistant add-on store
[![Add repository on my Home Assistant][repository-badge]][repository-url]
2. Reload the add-on store page
3. Click the "Install" button to install the add-on.
4. Add your inverter configuration to the add-on configuration
5. Start the "TSUN-Proxy" add-on
6. Check the logs of the "TSUN-Proxy" add-on to see if everything went well.
_Please note, the add-on is pre-configured to connect to
Home Assistant's default MQTT broker. There is no need to configure any MQTT parameters
if you're running the Mosquitto add-on. Home Assistant communication as well as the TSUN Cloud URL
and ports are also pre-configured._
This automatic handling of the TSUN Cloud and MQTT broker differs from the
[TSUN Proxy official documentation][tsunproxy]. The official documentation
states that `mqtt.host`, `mqtt.port`, `mqtt.user`, `mqtt.passwd`, `solarman.host`,
`solarman.port`, `tsun.host`, `tsun.port` and the Home Assistant options are required.
For the add-on, however, these settings are not needed.
## Configuration
**Note**: _Remember to restart the add-on when the configuration is changed._
Example add-on configuration after installation:
```yaml
inverters:
- serial: R17E760702080400
node_id: PV-Garage
suggested_area: Garage
modbus_polling: false
pv1.manufacturer: Shinefar
pv1.type: SF-M18/144550
pv2.manufacturer: Shinefar
pv2.type: SF-M18/144550
```
**Note**: _This is just an example, you need to replace the values with your own!_
Example add-on configuration for GEN3PLUS inverters:
```yaml
inverters:
- serial: Y17000000000000
monitor_sn: '2000000000'
node_id: PV-Garage
suggested_area: Garage
modbus_polling: true
client_mode.host: 192.168.x.x
client_mode.port: 8899
client_mode.forward: true
pv1.manufacturer: Shinefar
pv1.type: SF-M18/144550
pv2.manufacturer: Shinefar
pv2.type: SF-M18/144550
pv3.manufacturer: Shinefar
pv3.type: SF-M18/144550
pv4.manufacturer: Shinefar
pv4.type: SF-M18/144550
```
**Note**: _This is just an example, you need to replace the values with your own!_
More information about the configuration can be found on the [configuration details page][configdetails].
## MQTT settings
By default, this add-on requires no `mqtt` config from the user. **This is not an error!**
You are free to set these options if you want to override the defaults, but in
general usage that should not be needed and is not recommended for this add-on.
## Changelog & Releases
This repository keeps a change log using [GitHub's releases][releases]
functionality.
Releases are based on [Semantic Versioning][semver], and use the format
of `MAJOR.MINOR.PATCH`. In a nutshell, the version will be incremented
based on the following:
- `MAJOR`: Incompatible or major changes.
- `MINOR`: Backwards-compatible new features and enhancements.
- `PATCH`: Backwards-compatible bugfixes and package updates.
## Support
Got questions?
You have several options to get them answered:
- The Discussions section on [GitHub][discussions].
- The [Home Assistant Discord chat server][discord-ha] for general Home
Assistant discussions and questions.
You could also [open an issue][issue] on GitHub.
## Authors & contributors
The original setup of this repository is by [Stefan Allius][author].
We're very happy to receive contributions to this project! You can get started by reading [CONTRIBUTING.md][contribute].
## License
This project is licensed under the [BSD 3-clause License][bsd].
Note the aiomqtt library used is based on the paho-mqtt library, which has a dual license.
One of the licenses is the so-called [Eclipse Distribution License v1.0.][eclipse]
It is almost word-for-word identical to the BSD 3-clause License. The only differences are:
- One use of "COPYRIGHT OWNER" (EDL) instead of "COPYRIGHT HOLDER" (BSD)
- One use of "Eclipse Foundation, Inc." (EDL) instead of "copyright holder" (BSD)
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
[tsunproxy]: https://github.com/s-allius/tsun-gen3-proxy
[discussions]: https://github.com/s-allius/tsun-gen3-proxy/discussions
[author]: https://github.com/s-allius
[discord-ha]: https://discord.gg/c5DvZ4e
[issue]: https://github.com/s-allius/tsun-gen3-proxy/issues
[releases]: https://github.com/s-allius/tsun-gen3-proxy/releases
[contribute]: https://github.com/s-allius/tsun-gen3-proxy/blob/main/CONTRIBUTING.md
[semver]: https://semver.org/spec/v2.0.0.html
[bsd]: https://opensource.org/licenses/BSD-3-Clause
[eclipse]: https://www.eclipse.org/org/documents/edl-v10.php
[Mosquitto]: https://github.com/home-assistant/addons/blob/master/mosquitto/DOCS.md
[AdGuard]: https://github.com/hassio-addons/addon-adguard-home
[repository-badge]: https://img.shields.io/badge/Add%20repository%20to%20my-Home%20Assistant-41BDF5?logo=home-assistant&style=for-the-badge
[repository-url]: https://my.home-assistant.io/redirect/supervisor_add_addon_repository/?repository_url=https%3A%2F%2Fgithub.com%2Fs-allius%2Fha-addons
[configdetails]: https://github.com/s-allius/tsun-gen3-proxy/wiki/Configuration-details


@@ -10,73 +10,76 @@
######################
# 1 Build Image #
# 1 Build Base Image #
######################
ARG BUILD_FROM="ghcr.io/hassio-addons/base:stable"
FROM $BUILD_FROM
#######################
# 2 Modify Image #
#######################
#######################
# 3 Install apps #
#######################
ARG BUILD_FROM="ghcr.io/hassio-addons/base:17.0.1"
# hadolint ignore=DL3006
FROM $BUILD_FROM AS base
# Installiere Python, pip und virtuelle Umgebungstools
RUN apk add --no-cache python3 py3-pip py3-virtualenv
RUN apk add --no-cache python3=3.12.8-r1 py3-pip=24.3.1-r0 && \
python -m venv /opt/venv && \
. /opt/venv/bin/activate
# Erstelle ein virtuelles Umfeld und aktiviere es
RUN python3 -m venv /opt/venv
RUN . /opt/venv/bin/activate
# Stelle sicher, dass das Add-on das virtuelle Umfeld nutzt
ENV PATH="/opt/venv/bin:$PATH"
#######################
# 2 Build wheel #
#######################
FROM base AS builder
COPY rootfs/requirements.txt /root/
RUN apk add --no-cache build-base=0.5-r3 && \
python -m pip install --no-cache-dir wheel==0.45.1 && \
python -OO -m pip wheel --no-cache-dir --wheel-dir=/root/wheels -r /root/requirements.txt
#######################
# 3 Build runtime #
#######################
FROM base AS runtime
ARG SERVICE_NAME
ARG VERSION
ENV SERVICE_NAME=${SERVICE_NAME}
#######################
# 4 Install libraries #
#######################
# install the requirements from the wheel packages built in the builder stage
# and uninstall python packages and the alpine package manager to reduce the attack surface
# Kopiere die requirements.txt Datei in das Image
COPY rootfs/requirements.txt /tmp/requirements.txt
# installiere die Pakete aus requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
COPY --from=builder /root/wheels /root/wheels
RUN python -m pip install --no-cache-dir --no-cache --no-index /root/wheels/* && \
rm -rf /root/wheels && \
python -m pip uninstall --yes wheel pip && \
apk --purge del apk-tools
#######################
# 5 copy data #
#######################
# Add rootfs
COPY rootfs/ /
# make run.sh executable
RUN chmod a+x /run.sh
#######################
# 6 run app #
#######################
ARG SERVICE_NAME
ARG VERSION
ENV SERVICE_NAME=${SERVICE_NAME}
RUN echo ${VERSION} > /proxy-version.txt
# make run.sh executable
RUN chmod a+x /run.sh && \
echo ${VERSION} > /proxy-version.txt
# command to run on container start
CMD [ "/run.sh" ]


@@ -8,7 +8,7 @@ MQTT_PORT=$(bashio::services mqtt "port")
MQTT_USER=$(bashio::services mqtt "username")
MQTT_PASSWORD=$(bashio::services mqtt "password")
# wenn host gefunden wurde, dann nachricht ausgeben
# if an MQTT broker was (or was not) found, print a note
if [ -z "$MQTT_HOST" ]; then
echo "MQTT not found"
else
@@ -21,15 +21,13 @@ fi
cd /home || exit
# Erstelle Ordner für log und config
mkdir -p proxy/log
mkdir -p proxy/config
# Create folder for log and config files
mkdir -p /homeassistant/tsun-proxy/logs
cd /home/proxy || exit
export VERSION=$(cat /proxy-version.txt)
echo "Start Proxyserver..."
python3 server.py --json_config=/data/options.json
python3 server.py --json_config=/data/options.json --log_path=/homeassistant/tsun-proxy/logs/ --config_path=/homeassistant/tsun-proxy/ --log_backups=2


@@ -0,0 +1,95 @@
---
configuration:
inverters:
name: Wechselrichter
description: >+
Für jeden Wechselrichter muss die Seriennummer des Wechselrichters einer MQTT
Definition zugeordnet werden. Dazu wird der entsprechende Konfigurationsblock mit der
16-stellige Seriennummer gestartet, so dass alle nachfolgenden Parameter diesem
Wechselrichter zugeordnet sind.
Weitere wechselrichterspezifische Parameter (z.B. Polling Mode) können im
Konfigurationsblock gesetzt werden.
Die Seriennummern der GEN3 Wechselrichter beginnen mit `R17` und die der GEN3PLUS
Wechselrichter mit `Y17` oder `47`!
Siehe Beispielkonfiguration im Dokumentations-Tab
tsun.enabled:
name: Verbindung zur TSUN Cloud - nur für GEN3-Wechselrichter
description: >+
Schaltet die Verbindung zur TSUN Cloud ein/aus.
Diese Verbindung ist erforderlich, wenn Sie Daten an die TSUN Cloud senden möchten,
z.B. um die TSUN-Apps zu nutzen oder Firmware-Updates zu erhalten.
ein => normaler Proxy-Betrieb.
aus => Der Wechselrichter wird vom Internet isoliert.
solarman.enabled:
name: Verbindung zur Solarman Cloud - nur für GEN3PLUS Wechselrichter
description: >+
Schaltet die Verbindung zur Solarman Cloud ein/aus.
Diese Verbindung ist erforderlich, wenn Sie Daten an die Solarman Cloud senden möchten,
z.B. um die Solarman Apps zu nutzen oder Firmware-Updates zu erhalten.
ein => normaler Proxy-Betrieb.
aus => Der Wechselrichter wird vom Internet isoliert.
inverters.allow_all:
name: Erlaube Verbindungen von sämtlichen Wechselrichtern
description: >-
Der Proxy akzeptiert normalerweise nur Verbindungen von konfigurierten Wechselrichtern.
Schalten Sie dies für Testzwecke und unbekannte Seriennummern ein.
mqtt.host:
name: MQTT Broker Host
description: >-
Hostname oder IP-Adresse des MQTT-Brokers. Wenn nicht gesetzt, versucht das Addon, eine Verbindung zum Home Assistant MQTT-Broker herzustellen.
mqtt.port:
name: MQTT Broker Port
description: >-
Port des MQTT-Brokers. Wenn nicht gesetzt, versucht das Addon, eine Verbindung zum Home Assistant MQTT-Broker herzustellen.
mqtt.user:
name: MQTT Broker Benutzer
description: >-
Benutzer für den MQTT-Broker. Wenn nicht gesetzt, versucht das Addon, eine Verbindung zum Home Assistant MQTT-Broker herzustellen.
mqtt.passwd:
name: MQTT Broker Passwort
description: >-
Passwort für den MQTT-Broker. Wenn nicht gesetzt, versucht das Addon, eine Verbindung zum Home Assistant MQTT-Broker herzustellen.
ha.auto_conf_prefix:
name: MQTT-Präfix für das Abonnieren von Home Assistant-Statusaktualisierungen
ha.discovery_prefix:
name: MQTT-Präfix für das discovery topic
ha.entity_prefix:
name: MQTT-Themenpräfix für die Veröffentlichung von Wechselrichterwerten
ha.proxy_node_id:
name: MQTT-Knoten-ID für die proxy_node_id
ha.proxy_unique_id:
name: MQTT-eindeutige ID zur Identifizierung einer Proxy-Instanz
tsun.host:
name: TSUN Cloud Host
description: >-
Hostname oder IP-Adresse der TSUN-Cloud. Wenn nicht gesetzt, versucht das Addon, eine Verbindung zur Cloud logger.talent-monitoring.com herzustellen.
solarman.host:
name: Solarman Cloud Host
description: >-
Hostname oder IP-Adresse der Solarman-Cloud. Wenn nicht gesetzt, versucht das Addon, eine Verbindung zur Cloud iot.talent-monitoring.com herzustellen.
gen3plus.at_acl.tsun.allow:
name: TSUN GEN3PLUS ACL allow
description: >-
Liste erlaubter AT-Befehle für TSUN GEN3PLUS
gen3plus.at_acl.tsun.block:
name: TSUN GEN3PLUS ACL block
description: >-
Liste blockierter AT-Befehle für TSUN GEN3PLUS
gen3plus.at_acl.mqtt.allow:
name: MQTT GEN3PLUS ACL allow
description: >-
Liste erlaubter MQTT-Befehle für GEN3PLUS
gen3plus.at_acl.mqtt.block:
name: MQTT GEN3PLUS ACL block
description: >-
Liste blockierter MQTT-Befehle für GEN3PLUS
network:
5005/tcp: listening Port für TSUN GEN3 Wechselrichter
10000/tcp: listening Port für TSUN GEN3PLUS Wechselrichter


@@ -5,41 +5,37 @@ configuration:
description: >+
For each GEN3 inverter, the serial number of the inverter must be mapped to an MQTT
definition. To do this, the corresponding configuration block is started with
<16-digit serial number> so that all subsequent parameters are assigned
the 16-digit serial number so that all subsequent parameters are assigned
to this inverter. Further inverter-specific parameters (e.g. polling mode) can be set
in the configuration block
The serial numbers of all GEN3 inverters start with `R17`!
monitor_sn # The GEN3PLUS "Monitoring SN:"
node_id # MQTT replacement for inverters serial number
suggested_area # suggested installation area for home-assistant
modbus_polling # Disable optional MODBUS polling
pv1 # Optional, PV module descr
pv2 # Optional, PV module descr
The serial numbers of all GEN3 inverters start with `R17` and those of the GEN3PLUS
inverters with `Y17` or `47`!
For reference, see the example configuration in the Documentation tab
tsun.enabled:
name: Connection to TSUN Cloud - for GEN3 inverter only
description: >-
switch on/off connection to the TSUN cloud
description: >+
switch on/off connection to the TSUN cloud.
This connection is only required if you want to send data to the TSUN cloud,
e.g. to use the TSUN apps or receive firmware updates.
on - normal proxy operation
off - The Inverter become isolated from Internet
on => normal proxy operation.
off => The inverter becomes isolated from the Internet.
solarman.enabled:
name: Connection to Solarman Cloud - for GEN3PLUS inverter only
description: >-
switch on/off connection to the Solarman cloud
description: >+
switch on/off connection to the Solarman cloud.
This connection is only required if you want to send data to the Solarman cloud,
e.g. to use the Solarman apps or receive firmware updates.
on - normal proxy operation
off - The Inverter become isolated from Internet
on => normal proxy operation.
off => The inverter becomes isolated from the Internet.
inverters.allow_all:
name: Allow all connections from all inverters
description: >-
The proxy only usually accepts connections from known inverters.
The proxy usually only accepts connections from configured inverters.
Switch on for test purposes and unknown serial numbers.
mqtt.host:
name: MQTT Broker Host
@@ -70,16 +66,30 @@ configuration:
tsun.host:
name: TSUN Cloud Host
description: >-
Hostname or IP address of the TSUN cloud. if not set, the addon will try to connect to the cloud default
Hostname or IP address of the TSUN cloud. If not set, the addon will try to connect to the cloud
on logger.talent-monitoring.com
solarman.host:
name: Solarman Cloud Host
description: >-
Hostname or IP address of the Solarman cloud. if not set, the addon will try to connect to the cloud default
Hostname or IP address of the Solarman cloud. If not set, the addon will try to connect to the cloud
on iot.talent-monitoring.com
gen3plus.at_acl.tsun.allow:
name: TSUN GEN3PLUS ACL allow
description: >-
List of allowed TSUN GEN3PLUS AT commands
gen3plus.at_acl.tsun.block:
name: TSUN GEN3PLUS ACL block
description: >-
List of blocked TSUN GEN3PLUS AT commands
gen3plus.at_acl.mqtt.allow:
name: MQTT GEN3PLUS ACL allow
description: >-
List of allowed MQTT GEN3PLUS commands
gen3plus.at_acl.mqtt.block:
name: MQTT GEN3PLUS ACL block
description: >-
List of blocked MQTT GEN3PLUS commands
network:
8127/tcp: x...
5005/tcp: listening Port for TSUN GEN3 Devices
10000/tcp: listening Port for TSUN GEN3PLUS Devices
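
As the descriptions above note, the serial-number prefix tells the two generations apart: GEN3 serial numbers start with R17, GEN3PLUS serial numbers with Y17 or 47. A tiny, hypothetical Python helper just to illustrate that rule; the proxy's real detection logic may differ.

def inverter_generation(serial_no: str) -> str:
    # Prefix rule taken from the configuration descriptions above.
    if serial_no.startswith('R17'):
        return 'GEN3'
    if serial_no.startswith(('Y17', '47')):
        return 'GEN3PLUS'
    return 'unknown'

# e.g. inverter_generation('R17xxxxxxxxxxxxx') -> 'GEN3'  (placeholder serial number)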


@@ -1,25 +1,32 @@
name: "TSUN-Proxy"
description: "MQTT Proxy for TSUN Photovoltaic Inverters"
version: "dev"
name: {{name}}
description: {{description}}
version: {% if version is defined and version|length %} {{version}} {% else %} {{AppVersion}} {% endif %}
image: docker.io/sallius/tsun-gen3-addon
url: https://github.com/s-allius/tsun-gen3-proxy
slug: "tsun-proxy"
slug: {{slug}}
advanced: {{advanced}}
stage: {{stage}}
init: false
arch:
- aarch64
- amd64
- armhf
- armv7
- i386
startup: services
homeassistant_api: true
map:
- type: addon_config
path: /homeassistant/tsun-proxy
read_only: False
services:
- mqtt:want
ports:
8127/tcp: 8127
5005/tcp: 5005
10000/tcp: 10000
# FIXME: we disabled the watchdog due to exceptions in the ha supervisor. See: https://github.com/s-allius/tsun-gen3-proxy/issues/249
# watchdog: "http://[HOST]:[PORT:8127]/-/healthy"
# Definition of parameters in the configuration tab of the addon
# parameters are available within the container as /data/options.json
# and should be picked up by the proxy - the current workaround is a transfer script
@@ -32,8 +39,9 @@ schema:
node_id: str
suggested_area: str
modbus_polling: bool
client_mode_host: str?
client_mode_port: int?
client_mode.host: str?
client_mode.port: int?
client_mode.forward: bool?
#strings: # unfortunately, listing the following 3 parameters in the schema does not work; possibly nesting is not supported.
# - string: str
# type: str
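
As the comment above states, the values from the add-on's configuration tab are written to /data/options.json inside the container. A minimal Python sketch of reading that file; the exact structure of the entries follows the schema above, and everything beyond that is an assumption.

import json

def load_addon_options(path='/data/options.json'):
    # Home Assistant exposes the configuration tab values as a JSON file.
    with open(path, encoding='utf-8') as file:
        return json.load(file)

if __name__ == '__main__':
    options = load_addon_options()
    # Dump whatever was configured; key names follow the schema above.
    print(json.dumps(options, indent=2))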


@@ -0,0 +1,9 @@
{
"name": "TSUN-Proxy (Debug)",
"description": "MQTT Proxy for TSUN Photovoltaic Inverters with Debug Logging",
"version": "debug",
"slug": "tsun-proxy-debug",
"advanced": true,
"stage": "experimental"
}


@@ -0,0 +1,9 @@
{
"name": "TSUN-Proxy (Dev)",
"description": "MQTT Proxy for TSUN Photovoltaic Inverters",
"version": "dev",
"slug": "tsun-proxy-dev",
"advanced": false,
"stage": "experimental"
}


@@ -0,0 +1,8 @@
{
"name": "TSUN-Proxy",
"description": "MQTT Proxy for TSUN Photovoltaic Inverters",
"slug": "tsun-proxy",
"advanced": false,
"stage": "stable"
}
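
The three JSON parameter files above (debug, dev and stable) supply the values that are substituted into the config.yaml template shown earlier. A minimal Python/Jinja2 sketch of that render step; the file names, the fallback AppVersion value and the use of Jinja2 from Python are assumptions, and the real build may drive this step differently.

import json
from jinja2 import Template  # third-party package 'jinja2'

def render_config(template_path, params_path, app_version):
    with open(template_path, encoding='utf-8') as file:
        template = Template(file.read())
    with open(params_path, encoding='utf-8') as file:
        params = json.load(file)
    # AppVersion is the fallback used when the parameter file defines no version.
    return template.render(AppVersion=app_version, **params)

if __name__ == '__main__':
    # hypothetical file names and version, chosen only for illustration
    print(render_config('config.yaml.templ', 'rel.json', '0.12.0'))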


@@ -1,3 +0,0 @@
name: TSUN-Proxy
url: https://github.com/s-allius/tsun-gen3-proxy/ha_addons
maintainer: Stefan Allius


@@ -5,6 +5,10 @@
},
{
"path": "../wiki"
},
{
"name": "ha-addons",
"path": "../ha-addons/ha-addons"
}
],
"settings": {}