Initial commit

This commit is contained in:
Stefan Allius
2023-09-24 21:54:37 +02:00
commit 52d8eba52a
19 changed files with 1337 additions and 0 deletions

6
.gitignore vendored Normal file
View File

@@ -0,0 +1,6 @@
__pycache__
.pytest_cache
mosquitto/**
homeassistant/**
tsun_proxy/**
Doku/**

16
.vscode/launch.json vendored Normal file
View File

@@ -0,0 +1,16 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover over existing attributes to view their descriptions.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python: Aktuelle Datei",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": true
}
]
}

7
.vscode/settings.json vendored Normal file
View File

@@ -0,0 +1,7 @@
{
"python.testing.pytestArgs": [
"app","system_tests"
],
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true
}

28
CHANGELOG.md Normal file
View File

@@ -0,0 +1,28 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Removed
-
### Added
- Logger for inverter packets
- SIGTERM handler for fast docker restarts
- Proxy as non-root docker application
- Unit- and system tests
- Home Assistant auto configuration
- Self-sufficient island operation without internet
## [0.0.0] - 2023-08-21
### Added
- First checkin, the project was born

11
LICENSE.md Normal file
View File

@@ -0,0 +1,11 @@
Copyright (c) 2023 Stefan Allius.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

115
README.md Normal file
View File

@@ -0,0 +1,115 @@
<h1 align="center">TSUN-Gen3-Proxy</h1>
<p align="center">A proxy for</p>
<h3 align="center">TSUN Gen 3 Micro-Inverters</h3>
<p align="center">for easy</p>
<h3 align="center">MQTT/Home-Assistant</h3>
<p align="center">integration</p>
<p align="center">
<a href="https://opensource.org/licenses/BSD-3-Clause"><img alt="License: BSD-3-Clause" src="https://img.shields.io/badge/License-BSD_3--Clause-green.svg"></a>
<a href="https://www.python.org/downloads/release/python-3110/"><img alt="Supported Python versions" src="https://img.shields.io/badge/python-3.11-blue.svg"></a>
<a href="https://sbtinstruments.github.io/aiomqtt/introduction.html"><img alt="Supported Python versions" src="https://img.shields.io/badge/aiomqtt-1.2.0-lightblue.svg"></a>
<a href="https://toml.io/en/v1.0.0"><img alt="Supported Python versions" src="https://img.shields.io/badge/toml-1.0.0-lightblue.svg"></a>
</p>
###
# Overview
The "TSUN Gen3 Micro-Inverter" proxy enables a reliable connection between TSUN third generation inverters and an MQTT broker to integrate the inverter into typical home automations.
The inverter establishes a TCP connection to the TSUN Cloud to transmit current measured values every 300 seconds. To be able to forward the measurement data to an MQTT broker, the proxy must be looped into this TCP connection.
Through this, the inverter then establishes a connection to the proxy and the proxy establishes another connection to the TSUN Cloud. The transmitted data is interpreted by the proxy and then passed on to both the TSUN Cloud and the MQTT broker. The connection to the TSUN Cloud is optional and can be switched off in the configuration (default is on). Then no more data is sent to the Internet, but no more remote updates of firmware and operating parameters (e.g. rated power, grid parameters) are possible.
By means of `docker` a simple installation and operation is possible. By using `docker-composer`, a complete stack of proxy, `MQTT-brocker` and `home-assistant` can be started easily.
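For example, from the directory containing the provided docker-compose.yaml, the whole stack comes up with a single command:
```sh
# start the complete stack (proxy, MQTT broker, Home Assistant) in the background
docker compose up -d
```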
```
❗ An essential requirement is that the proxy can be looped into the connection between the inverter and the TSUN cloud.
There are various ways to do this, for example via a DNS host entry or via firewall rules (iptables) on your router. Depending on the circumstances, not all of them may be possible.
If you use a PiHole, you can also store the host entry in the PiHole.
```
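For example, on a PiHole or any other dnsmasq-based resolver, a single host override is enough. This is a sketch; `192.168.1.2` is a placeholder for the host running the proxy:
```
# dnsmasq: resolve the TSUN cloud hostname to the proxy host instead
address=/logger.talent-monitoring.com/192.168.1.2
```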
## Features
- `MQTT` support
- `Home-Assistant` auto-discovery support
- Self-sufficient island operation without internet
- Non-root Docker container
## Requirements
- A running Docker engine to host the container
- Ability to loop the proxy into the connection between the inverter and the TSUN cloud
## License
This project is licensed under the [BSD 3-clause License](https://opensource.org/licenses/BSD-3-Clause).
Note that the aiomqtt library used by this project builds on the paho-mqtt library, which is dual-licensed. One of its licenses is the so-called [Eclipse Distribution License v1.0](https://www.eclipse.org/org/documents/edl-v10.php). It is almost word-for-word identical to the BSD 3-clause License. The only differences are:
- One use of "COPYRIGHT OWNER" (EDL) instead of "COPYRIGHT HOLDER" (BSD)
- One use of "Eclipse Foundation, Inc." (EDL) instead of "copyright holder" (BSD)
## Versioning
This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). Breaking changes will only occur in major `X.0.0` releases.
## Changelog
The changelog lives in [CHANGELOG.md](https://github.com/s-allius/tsun-gen3-proxy/blob/main/CHANGELOG.md). It follows the principles of [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
###
# Configuration
The Docker container does not require any special configuration.
On the host, two directories (one for log files and one for config files) must be mapped. If necessary, the UID of the proxy process, which also owns the log and configuration files, can be adjusted.
The proxy is configured via the file `config.toml`. When the proxy starts, a file `config.example.toml` is copied into the config directory. This file shows all possible parameters and their default values. Changes to the example file itself are not evaluated. To configure the proxy, rename `config.example.toml` to `config.toml` and adjust the corresponding values. To load the new configuration, restart the proxy.
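A minimal standalone `docker run` sketch, derived from the mappings in the bundled docker-compose.yaml; the host paths are placeholders for your setup:
```sh
# map the config and log directories and publish the inverter port
docker run -d --name tsun-proxy \
  -p 5005:5005 \
  -v "$PWD/tsun-proxy/log:/home/tsun-proxy/log" \
  -v "$PWD/tsun-proxy/config:/home/tsun-proxy/config" \
  docker.io/sallius/tsun-gen3-proxy:latest
```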
## Proxy Configuration
The configuration uses the TOML format, which aims to be easy to read thanks to its obvious semantics.
You can find more details here: https://toml.io/en/v1.0.0
```toml
# configuration to reach tsun cloud
tsun.enabled = true # false: disables connecting to the tsun cloud, and avoids updates
tsun.host = 'logger.talent-monitoring.com'
tsun.port = 5005
# mqtt broker configuration
mqtt.host = 'mqtt' # hostname or IP address of the MQTT broker
mqtt.port = 1883
mqtt.user = ''
mqtt.passwd = ''
# home-assistant
ha.auto_conf_prefix = 'homeassistant' # MQTT prefix for subscribing to homeassistant status updates
ha.discovery_prefix = 'homeassistant' # MQTT prefix for discovery topic
ha.entity_prefix = 'tsun' # MQTT topic prefix for publishing inverter values
# microinverters
inverters.allow_all = false # true: allow inverters, even if we have no inverter mapping
# inverter mapping, maps a `serial_no` to a `node_id` and defines an optional `suggested_area` for `home-assistant`
#
# for each inverter, add a block starting with [inverters."<16-digit serial number>"]
[inverters."R17xxxxxxxxxxxx1"]
node_id = 'inv1' # Optional, MQTT replacement for the inverter's serial number
suggested_area = 'roof' # Optional, suggested installation area for home-assistant
[inverters."R17xxxxxxxxxxxx2"]
node_id = 'inv2' # Optional, MQTT replacement for the inverter's serial number
suggested_area = 'balcony' # Optional, suggested installation area for home-assistant
```

3
app/.dockerignore Normal file
View File

@@ -0,0 +1,3 @@
tests/
**/__pycache__
*.pyc

52
app/Dockerfile Normal file
View File

@@ -0,0 +1,52 @@
ARG SERVICE_NAME="tsun-proxy"
ARG UID=1026
# set base image (host OS)
FROM python:3.11-slim-bookworm AS builder
RUN pip install --upgrade pip
# copy the dependencies file to the working directory
COPY ./requirements.txt .
# install dependencies
RUN pip install --user -r requirements.txt
#
# second unnamed stage
FROM python:3.11-slim-bookworm
ARG SERVICE_NAME
ARG UID
ENV SERVICE_NAME=$SERVICE_NAME
ENV UID=$UID
RUN addgroup --gid 1000 $SERVICE_NAME && \
adduser --ingroup $SERVICE_NAME --shell /bin/false --disabled-password --uid $UID $SERVICE_NAME && \
mkdir -p /home/$SERVICE_NAME/log && \
chown $SERVICE_NAME:$SERVICE_NAME /home/$SERVICE_NAME/log && \
mkdir -p /home/$SERVICE_NAME/config && \
chown $SERVICE_NAME:$SERVICE_NAME /home/$SERVICE_NAME/config
# set the working directory in the container
WORKDIR /home/$SERVICE_NAME
USER $SERVICE_NAME
# copy only the dependencies installation from the 1st stage image
COPY --from=builder --chown=$SERVICE_NAME:$SERVICE_NAME /root/.local /home/$SERVICE_NAME/.local
# copy the content of the local src and config directory to the working directory
COPY --chown=$SERVICE_NAME:$SERVICE_NAME config .
COPY --chown=$SERVICE_NAME:$SERVICE_NAME src .
# update PATH environment variable
ENV HOME=/home/$SERVICE_NAME
ENV PATH=/home/$SERVICE_NAME/.local/bin:$PATH
EXPOSE 5005
LABEL de.allius.image.authors="Stefan Allius <stefan.allius@t-online.de>"
# command to run on container start
CMD [ "python3", "./server.py" ]

34
app/config/default_config.toml Normal file
View File

@@ -0,0 +1,34 @@
# configuration to reach tsun cloud
tsun.enabled = true # false: disables connecting to the tsun cloud, and avoids updates
tsun.host = 'logger.talent-monitoring.com'
tsun.port = 5005
# mqtt broker configuration
mqtt.host = 'mqtt' # hostname or IP address of the MQTT broker
mqtt.port = 1883
mqtt.user = ''
mqtt.passwd = ''
# home-assistant
ha.auto_conf_prefix = 'homeassistant' # MQTT prefix for subscribing to homeassistant status updates
ha.discovery_prefix = 'homeassistant' # MQTT prefix for discovery topic
ha.entity_prefix = 'tsun' # MQTT topic prefix for publishing inverter values
# microinverters
inverters.allow_all = true # allow all inverters, even if we have no inverter mapping
# inverter mapping, maps a `serial_no` to a `node_id` and defines an optional `suggested_area` for `home-assistant`
#
# for each inverter, add a block starting with [inverters."<16-digit serial number>"]
#[inverters."R17xxxxxxxxxxxx1"]
#node_id = '' # Optional, MQTT replacement for the inverter's serial number
#suggested_area = '' # Optional, suggested installation area for home-assistant
#[inverters."R17xxxxxxxxxxxx2"]
#node_id = '' # Optional, MQTT replacement for the inverter's serial number
#suggested_area = '' # Optional, suggested installation area for home-assistant

2
app/requirements.txt Normal file
View File

@@ -0,0 +1,2 @@
aiomqtt==1.2.0
schema

145
app/src/async_stream.py Normal file
View File

@@ -0,0 +1,145 @@
import logging, traceback, aiomqtt, json
from config import Config
from messages import Message, hex_dump_memory
from mqtt import Mqtt
logger = logging.getLogger('conn')
logger_mqtt = logging.getLogger('mqtt')
class AsyncStream(Message):
def __init__(self, proxy, reader, writer, addr, stream=None, server_side=True):
super().__init__()
self.proxy = proxy
self.reader = reader
self.writer = writer
self.remoteStream = stream
self.addr = addr
self.server_side = server_side
self.mqtt = Mqtt()
self.unique_id = 0
self.node_id = ''
'''
Our public methods
'''
async def set_serial_no(self, serial_no : str):
logger_mqtt.info(f'SerialNo: {serial_no}')
if self.unique_id != serial_no:
inverters = Config.get('inverters')
#logger_mqtt.debug(f'Inverters: {inverters}')
if serial_no in inverters:
logger_mqtt.debug(f'SerialNo {serial_no} allowed!')
inv = inverters[serial_no]
self.node_id = inv['node_id']
sug_area = inv['suggested_area']
else:
logger_mqtt.debug(f'SerialNo {serial_no} not known!')
self.node_id = ''
sug_area = ''
if not inverters['allow_all']:
self.unique_id = None
logger_mqtt.error('ignore message from unknown inverter!')
return
self.unique_id = serial_no
ha = Config.get('ha')
self.entity_prfx = ha['entity_prefix'] + '/'
discovery_prfx = ha['discovery_prefix'] + '/'
if self.server_side:
try:
for data_json, id in self.db.ha_confs(self.entity_prfx + self.node_id, self.unique_id, sug_area):
logger_mqtt.debug(f'Register: {data_json}')
await self.mqtt.publish(f"{discovery_prfx}sensor/{self.node_id}{id}/config", data_json)
except Exception:
logging.error(
f"Proxy: Exception:\n"
f"{traceback.format_exc()}")
async def loop(self) -> None:
while True:
try:
await self.__async_read()
if self.id_str:
await self.set_serial_no(self.id_str.decode("utf-8"))
if self.unique_id:
await self.__async_write()
await self.__async_forward()
await self.__async_publ_mqtt()
except (ConnectionResetError,
ConnectionAbortedError,
RuntimeError) as error:
logger.error(f'In loop for {self.addr}: {error}')
self.close()
return
except Exception:
logger.error(
f"Exception for {self.addr}:\n"
f"{traceback.format_exc()}")
self.close()
return
def close(self):
logger.info(f'in async_stream.close() {self.addr}')
self.writer.close()
self.proxy = None
self.remoteStream = None
'''
Our private methods
'''
async def __async_read(self) -> None:
data = await self.reader.read(4096)
if data:
self._recv_buffer += data
self.read() # call read in parent class
else:
raise RuntimeError("Peer closed.")
async def __async_write(self) -> None:
if self._send_buffer:
hex_dump_memory(logging.INFO, f'Transmit to {self.addr}:', self._send_buffer, len(self._send_buffer))
self.writer.write(self._send_buffer)
await self.writer.drain()
self._send_buffer = bytearray(0) #self._send_buffer[sent:]
async def __async_forward(self) -> None:
if self._forward_buffer:
if not self.remoteStream:
tsun = Config.get('tsun')
self.remoteStream = await self.proxy.CreateClientStream (self, tsun['host'], tsun['port'])
if self.remoteStream:
hex_dump_memory(logging.DEBUG, f'Forward to {self.remoteStream.addr}:', self._forward_buffer, len(self._forward_buffer))
self.remoteStream.writer.write (self._forward_buffer)
await self.remoteStream.writer.drain()
self._forward_buffer = bytearray(0)
async def __async_publ_mqtt(self) -> None:
if self.server_side:
db = self.db.db
for key in self.new_data:
if self.new_data[key] and key in db:
data_json = json.dumps(db[key])
logger_mqtt.info(f'{key}: {data_json}')
await self.mqtt.publish(f"{self.entity_prfx}{self.node_id}{key}", data_json)
self.new_data[key] = False
def __del__ (self):
logger.debug ("AsyncStream __del__")

76
app/src/config.py Normal file
View File

@@ -0,0 +1,76 @@
'''Config module handles the proxy configuration in the config.toml file'''
import shutil, tomllib, logging
from schema import Schema, And, Use, Optional
class Config():
'''Static class Config reads and sanitizes the config.
Read the config.toml file and sanitize it with read().
Get named parts of the config with get()'''
config = {}
conf_schema = Schema({ 'tsun': {
'enabled': Use(bool),
'host': Use(str),
'port': And(Use(int), lambda n: 1024 <= n <= 65535)},
'mqtt': {
'host': Use(str),
'port': And(Use(int), lambda n: 1024 <= n <= 65535),
'user': And(Use(str), Use(lambda s: s if len(s) >0 else None)),
'passwd': And(Use(str), Use(lambda s: s if len(s) >0 else None))},
'ha': {
'auto_conf_prefix': Use(str),
'discovery_prefix': Use(str),
'entity_prefix': Use(str)},
'inverters': {
'allow_all' : Use(bool),
And(Use(str), lambda s: len(s) == 16 ): {
Optional('node_id', default=""): And(Use(str),Use(lambda s: s +'/' if len(s)> 0 and s[-1] != '/' else s)),
Optional('suggested_area', default=""): Use(str)
}}
}, ignore_extra_keys=True)
@classmethod
def read(cls) -> None:
'''Read config file, merge it with the default config and sanitize the result'''
config = {}
logger = logging.getLogger('data')
try:
# make the default config transparent by copying it to the config.example file
shutil.copy2("default_config.toml", "config/config.example.toml")
# read example config file as default configuration
with open("default_config.toml", "rb") as f:
def_config = tomllib.load(f)
# overwrite the default values, with values from the config.toml file
with open("config/config.toml", "rb") as f:
usr_config = tomllib.load(f)
config['tsun'] = def_config['tsun'] | usr_config['tsun']
config['mqtt'] = def_config['mqtt'] | usr_config['mqtt']
config['ha'] = def_config['ha'] | usr_config['ha']
config['inverters'] = def_config['inverters'] | usr_config['inverters']
cls.config = cls.conf_schema.validate(config)
logging.debug(f'Read config: "{cls.config}"')
except Exception as error:
logger.error(f'Config.read: {error}')
cls.config = {}
@classmethod
def get(cls, member:str = None):
'''Get a named attribute from the proxy config. If member == None it returns the complete config dict'''
if member:
return cls.config.get(member, {})
else:
return cls.config

186
app/src/infos.py Normal file
View File

@@ -0,0 +1,186 @@
import struct, json, logging
class Infos:
def __init__(self):
self.db = {}
self.tracer = logging.getLogger('data')
__info_defs={
# collector values:
0x00092ba8: {'name':['collector', 'Collector_Fw_Version'], 'level': logging.INFO, 'unit': ''},
0x000927c0: {'name':['collector', 'Chip_Type'], 'level': logging.DEBUG, 'unit': ''},
0x00092f90: {'name':['collector', 'Chip_Model'], 'level': logging.DEBUG, 'unit': ''},
0x00095a88: {'name':['collector', 'Trace_URL'], 'level': logging.DEBUG, 'unit': ''},
0x00095aec: {'name':['collector', 'Logger_URL'], 'level': logging.DEBUG, 'unit': ''},
0x000cf850: {'name':['collector', 'Data_Up_Interval'], 'level': logging.DEBUG, 'unit': 's'},
0x000005dc: {'name':['collector', 'Rated_Power'], 'level': logging.DEBUG, 'unit': 'W'},
# inverter values:
0x0000000a: {'name':['inverter', 'Product_Name'], 'level': logging.DEBUG, 'unit': ''},
0x00000014: {'name':['inverter', 'Manufacturer'], 'level': logging.DEBUG, 'unit': ''},
0x0000001e: {'name':['inverter', 'Version'], 'level': logging.INFO, 'unit': ''},
0x00000028: {'name':['inverter', 'Serial_Number'], 'level': logging.DEBUG, 'unit': ''},
0x00000032: {'name':['inverter', 'Equipment_Model'], 'level': logging.DEBUG, 'unit': ''},
# env:
0x00000514: {'name':['env', 'Inverter_Temp'], 'level': logging.DEBUG, 'unit': '°C'},
0x000c3500: {'name':['env', 'Signal_Strength'], 'level': logging.DEBUG, 'unit': '%'},
# events:
0x00000191: {'name':['events', '401_'], 'level': logging.DEBUG, 'unit': ''},
0x00000192: {'name':['events', '402_'], 'level': logging.DEBUG, 'unit': ''},
0x00000193: {'name':['events', '403_'], 'level': logging.DEBUG, 'unit': ''},
0x00000194: {'name':['events', '404_'], 'level': logging.DEBUG, 'unit': ''},
0x00000195: {'name':['events', '405_'], 'level': logging.DEBUG, 'unit': ''},
0x00000196: {'name':['events', '406_'], 'level': logging.DEBUG, 'unit': ''},
0x00000197: {'name':['events', '407_'], 'level': logging.DEBUG, 'unit': ''},
0x00000198: {'name':['events', '408_'], 'level': logging.DEBUG, 'unit': ''},
0x00000199: {'name':['events', '409_'], 'level': logging.DEBUG, 'unit': ''},
0x0000019a: {'name':['events', '410_'], 'level': logging.DEBUG, 'unit': ''},
0x0000019b: {'name':['events', '411_'], 'level': logging.DEBUG, 'unit': ''},
0x0000019c: {'name':['events', '412_'], 'level': logging.DEBUG, 'unit': ''},
0x0000019d: {'name':['events', '413_'], 'level': logging.DEBUG, 'unit': ''},
0x0000019e: {'name':['events', '414_'], 'level': logging.DEBUG, 'unit': ''},
0x0000019f: {'name':['events', '415_GridFreqOverRating'], 'level': logging.DEBUG, 'unit': ''},
0x000001a0: {'name':['events', '416_'], 'level': logging.DEBUG, 'unit': ''},
# grid measures:
0x000003e8: {'name':['grid', 'Voltage'], 'level': logging.DEBUG, 'unit': 'V'},
0x0000044c: {'name':['grid', 'Current'], 'level': logging.DEBUG, 'unit': 'A'},
0x000004b0: {'name':['grid', 'Frequency'], 'level': logging.DEBUG, 'unit': 'Hz'},
0x00000640: {'name':['grid', 'Output_Power'], 'level': logging.INFO, 'unit': 'W', 'ha':{'dev_cla': 'power', 'stat_cla': 'measurement', 'id':'out_power_', 'fmt':'| float','name': 'Actual Power'}},
# input measures:
0x000006a4: {'name':['input', 'pv1', 'Voltage'], 'level': logging.DEBUG, 'unit': 'V'},
0x00000708: {'name':['input', 'pv1', 'Current'], 'level': logging.DEBUG, 'unit': 'A'},
0x0000076c: {'name':['input', 'pv1', 'Power'], 'level': logging.INFO, 'unit': 'W', 'ha':{'dev_cla': 'power', 'stat_cla': 'measurement', 'id':'power_pv1_','name': 'Power PV1', 'val_tpl' :"{{ (value_json['pv1']['Power'] | float)}}"}},
0x000007d0: {'name':['input', 'pv2', 'Voltage'], 'level': logging.DEBUG, 'unit': 'V'},
0x00000834: {'name':['input', 'pv2', 'Current'], 'level': logging.DEBUG, 'unit': 'A'},
0x00000898: {'name':['input', 'pv2', 'Power'], 'level': logging.INFO, 'unit': 'W', 'ha':{'dev_cla': 'power', 'stat_cla': 'measurement', 'id':'power_pv2_','name': 'Power PV2', 'val_tpl' :"{{ (value_json['pv2']['Power'] | float)}}"}},
0x000008fc: {'name':['input', 'pv3', 'Voltage'], 'level': logging.DEBUG, 'unit': 'V'},
0x00000960: {'name':['input', 'pv3', 'Current'], 'level': logging.DEBUG, 'unit': 'A'},
0x000009c4: {'name':['input', 'pv3', 'Power'], 'level': logging.DEBUG, 'unit': 'W', 'ha':{'dev_cla': 'power', 'stat_cla': 'measurement', 'id':'power_pv3_','name': 'Power PV3', 'val_tpl' :"{{ (value_json['pv3']['Power'] | float)}}"}},
0x00000a28: {'name':['input', 'pv4', 'Voltage'], 'level': logging.DEBUG, 'unit': 'V'},
0x00000a8c: {'name':['input', 'pv4', 'Current'], 'level': logging.DEBUG, 'unit': 'A'},
0x00000af0: {'name':['input', 'pv4', 'Power'], 'level': logging.DEBUG, 'unit': 'W', 'ha':{'dev_cla': 'power', 'stat_cla': 'measurement', 'id':'power_pv4_','name': 'Power PV4', 'val_tpl' :"{{ (value_json['pv4']['Power'] | float)}}"}},
0x00000c1c: {'name':['input', 'pv1', 'Daily_Generation'], 'level': logging.DEBUG, 'unit': 'kWh', 'ha':{'dev_cla': 'energy', 'stat_cla': 'total_increasing', 'id':'daily_gen_pv1_','name': 'Daily Generation PV1', 'val_tpl' :"{{ (value_json['pv1']['Daily_Generation'] | float)}}"}},
0x00000c80: {'name':['input', 'pv1', 'Total_Generation'], 'level': logging.DEBUG, 'unit': 'kWh', 'ha':{'dev_cla': 'energy', 'stat_cla': 'total', 'id':'total_gen_pv1_','name': 'Total Generation PV1', 'val_tpl' :"{{ (value_json['pv1']['Total_Generation'] | float)}}"}},
0x00000ce4: {'name':['input', 'pv2', 'Daily_Generation'], 'level': logging.DEBUG, 'unit': 'kWh', 'ha':{'dev_cla': 'energy', 'stat_cla': 'total_increasing', 'id':'daily_gen_pv2_','name': 'Daily Generation PV2', 'val_tpl' :"{{ (value_json['pv2']['Daily_Generation'] | float)}}"}},
0x00000d48: {'name':['input', 'pv2', 'Total_Generation'], 'level': logging.DEBUG, 'unit': 'kWh', 'ha':{'dev_cla': 'energy', 'stat_cla': 'total', 'id':'total_gen_pv2_','name': 'Total Generation PV2', 'val_tpl' :"{{ (value_json['pv2']['Total_Generation'] | float)}}"}},
0x00000dac: {'name':['input', 'pv3', 'Daily_Generation'], 'level': logging.DEBUG, 'unit': 'kWh', 'ha':{'dev_cla': 'energy', 'stat_cla': 'total_increasing', 'id':'daily_gen_pv3_','name': 'Daily Generation PV3', 'val_tpl' :"{{ (value_json['pv3']['Daily_Generation'] | float)}}"}},
0x00000e10: {'name':['input', 'pv3', 'Total_Generation'], 'level': logging.DEBUG, 'unit': 'kWh', 'ha':{'dev_cla': 'energy', 'stat_cla': 'total', 'id':'total_gen_pv3_','name': 'Total Generation PV3', 'val_tpl' :"{{ (value_json['pv3']['Total_Generation'] | float)}}"}},
0x00000e74: {'name':['input', 'pv4', 'Daily_Generation'], 'level': logging.DEBUG, 'unit': 'kWh', 'ha':{'dev_cla': 'energy', 'stat_cla': 'total_increasing', 'id':'daily_gen_pv4_','name': 'Daily Generation PV4', 'val_tpl' :"{{ (value_json['pv4']['Daily_Generation'] | float)}}"}},
0x00000ed8: {'name':['input', 'pv4', 'Total_Generation'], 'level': logging.DEBUG, 'unit': 'kWh', 'ha':{'dev_cla': 'energy', 'stat_cla': 'total', 'id':'total_gen_pv4_','name': 'Total Generation PV4', 'val_tpl' :"{{ (value_json['pv4']['Total_Generation'] | float)}}"}},
# total:
0x00000b54: {'name':['total', 'Daily_Generation'], 'level': logging.INFO, 'unit': 'kWh', 'ha':{'dev_cla': 'energy', 'stat_cla': 'total_increasing', 'id':'daily_gen_', 'fmt':'| float','name': 'Daily Generation'}},
0x00000bb8: {'name':['total', 'Total_Generation'], 'level': logging.INFO, 'unit': 'kWh', 'ha':{'dev_cla': 'energy', 'stat_cla': 'total', 'id':'total_gen_', 'fmt':'| float','name': 'Total Generation', 'icon':'mdi:solar-power'}},
0x000c96a8: {'name':['total', 'Power_On_Time'], 'level': logging.DEBUG, 'unit': 's', 'ha':{'dev_cla': 'duration', 'stat_cla': 'measurement', 'id':'power_on_time_', 'name': 'Power on Time', 'val_tpl':"{{ (value_json['Power_On_Time'] | float)}}", 'nat_prc':'3'}},
}
def ha_confs(self, prfx="tsun/garagendach/", snr='123', sug_area =''):
tab = self.__info_defs
for key in tab:
row = tab[key]
if 'ha' in row:
ha = row['ha']
attr = {}
if 'name' in ha:
attr['name'] = ha['name'] # eg. 'name': "Actual Power"
else:
attr['name'] = row['name'][-1] # eg. 'name': "Actual Power"
attr['stat_t'] = prfx +row['name'][0] # eg. 'stat_t': "tsun/garagendach/grid"
attr['dev_cla'] = ha['dev_cla'] # eg. 'dev_cla': 'power'
attr['stat_cla'] = ha['stat_cla'] # eg. 'stat_cla': "measurement"
attr['uniq_id'] = ha['id']+snr # eg. 'uniq_id':'out_power_123'
if 'val_tpl' in ha:
attr['val_tpl'] = ha['val_tpl'] # eg. 'val_tpl': "{{ value_json['Output_Power']|float }}"
elif 'fmt' in ha:
attr['val_tpl'] = '{{value_json' + f"['{row['name'][-1]}'] {ha['fmt']}" + '}}' # eg. 'val_tpl': "{{ value_json['Output_Power']|float }}"
if 'unit' in row:
attr['unit_of_meas'] = row['unit'] # eg. 'unit_of_meas': 'W'
if 'icon' in ha:
attr['icon'] = ha['icon'] # eg. 'icon':'mdi:solar-power'
if 'nat_prc' in ha:
attr['suggested_display_precision'] = ha['nat_prc']
# eg. 'dev':{'name':'Microinverter','mdl':'MS-600','ids':["inverter_123"],'mf':'TSUN','sa': 'auf Garagendach'}
# attr['dev'] = {'name':'Microinverter','mdl':'MS-600','ids':[f'inverter_{snr}'],'mf':'TSUN','sa': 'auf Garagendach'}
dev = {}
dev['name'] = 'Microinverter' #fixme
dev['mdl'] = 'MS-600' #fixme
dev['ids'] = [f'inverter_{snr}']
dev['mf'] = 'TSUN' #fixme
dev['sa'] = sug_area
dev['sw'] = '0.01' #fixme
dev['hw'] = 'Hw0.01' #fixme
#dev['via_device'] = #fixme
attr['dev'] = dev
yield json.dumps (attr), attr['uniq_id']
def __key_obj(self, id) -> list:
d = self.__info_defs.get(id, {'name': None, 'level': logging.DEBUG, 'unit': ''})
return d['name'], d['level'], d['unit']
def parse(self, buf):
result = struct.unpack_from('!l', buf, 0)
elms = result[0]
i = 0
ind = 4
while i < elms:
result = struct.unpack_from('!lB', buf, ind)
info_id = result[0]
data_type = result[1]
ind += 5
keys, level, unit = self.__key_obj(info_id)
if data_type==0x54: # 'T' -> Pascal-String
str_len = buf[ind]
result = struct.unpack_from(f'!{str_len+1}p', buf, ind)[0].decode(encoding='ascii', errors='replace')
ind += str_len+1
elif data_type==0x49: # 'I' -> int32
result = struct.unpack_from(f'!l', buf, ind)[0]
ind += 4
elif data_type==0x53: # 'S' -> short
result = struct.unpack_from(f'!h', buf, ind)[0]
ind += 2
elif data_type==0x46: # 'F' -> float32
result = round(struct.unpack_from('!f', buf, ind)[0],2)
ind += 4
else: # unknown data type: stop parsing, since the element length is unknown
self.tracer.log(level, f'unknown data type 0x{data_type:02x} for info-id 0x{info_id:x}')
break
if keys:
dict = self.db
name = ''
for key in keys[:-1]:
if key not in dict:
dict[key] = {}
dict = dict[key]
name += key + '.'
update = keys[-1] not in dict or dict[keys[-1]] != result
dict[keys[-1]] = result
name += keys[-1]
yield keys[0], update
else:
update = False
name = str(f'info-id.0x{info_id:x}')
self.tracer.log(level, f'{name} : {result}{unit}')
i +=1

68
app/src/logging.ini Normal file
View File

@@ -0,0 +1,68 @@
[loggers]
keys=root,tracer,mesg,conn,data,mqtt
[handlers]
keys=console_handler,file_handler_name1,file_handler_name2
[formatters]
keys=console_formatter,file_formatter
[logger_root]
level=DEBUG
handlers=console_handler,file_handler_name1
[logger_mesg]
level=DEBUG
handlers=console_handler,file_handler_name1,file_handler_name2
propagate=0
qualname=msg
[logger_conn]
level=DEBUG
handlers=console_handler,file_handler_name1,file_handler_name2
propagate=0
qualname=conn
[logger_data]
level=DEBUG
handlers=console_handler,file_handler_name1,file_handler_name2
propagate=0
qualname=data
[logger_mqtt]
level=DEBUG
handlers=console_handler,file_handler_name1,file_handler_name2
propagate=0
qualname=mqtt
[logger_tracer]
level=INFO
handlers=file_handler_name2
propagate=0
qualname=tracer
[handler_console_handler]
class=StreamHandler
level=INFO
formatter=console_formatter
[handler_file_handler_name1]
class=handlers.TimedRotatingFileHandler
level=NOTSET
formatter=file_formatter
args=('log/proxy.log', 'midnight')
[handler_file_handler_name2]
class=handlers.TimedRotatingFileHandler
level=NOTSET
formatter=file_formatter
args=('log/trace.log', 'midnight')
[formatter_console_formatter]
format=%(asctime)s %(levelname)5s | %(name)4s | %(message)s
datefmt=%d-%m-%Y %H:%M:%S
[formatter_file_formatter]
format=%(asctime)s %(levelname)5s | %(name)4s | %(message)s
datefmt=%d-%m-%Y %H:%M:%S

296
app/src/messages.py Normal file
View File

@@ -0,0 +1,296 @@
import struct, logging, time
import weakref
from config import Config
from datetime import datetime
if __name__ == "app.src.messages":
from app.src.infos import Infos
else:
from infos import Infos
logger = logging.getLogger('msg')
def hex_dump_memory(level, info, data, num):
s = ''
n = 0
lines = []
lines.append(info)
tracer = logging.getLogger('tracer')
#data = list((num * ctypes.c_byte).from_address(ptr))
if len(data) == 0:
return '<empty>'
for i in range(0, num, 16):
line = ' '
line += '%04x | ' % (i)
n += 16
for j in range(n-16, n):
if j >= len(data): break
line += '%02x ' % abs(data[j])
line += ' ' * (3 * 16 + 9 - len(line)) + ' | '
for j in range(n-16, n):
if j >= len(data): break
c = data[j] if not (data[j] < 0x20 or data[j] > 0x7e) else '.'
line += '%c' % c
lines.append(line)
tracer.log(level, '\n'.join(lines))
#return '\n'.join(lines)
class Control:
def __init__(self, ctrl:int):
self.ctrl = ctrl
def __int__(self) -> int:
return self.ctrl
def is_ind(self) -> bool:
return not (self.ctrl & 0x08)
#def is_req(self) -> bool:
# return not (self.ctrl & 0x08)
def is_resp(self) -> bool:
return self.ctrl & 0x08
class IterRegistry(type):
def __iter__(cls):
for ref in cls._registry:
obj = ref()
if obj is not None: yield obj
class Message(metaclass=IterRegistry):
_registry = []
def __init__(self):
self._registry.append(weakref.ref(self))
self.header_valid = False
self.header_len = 0
self.data_len = 0
self._recv_buffer = b''
self._send_buffer = bytearray(0)
self._forward_buffer = bytearray(0)
self.db = Infos()
self.new_data = {}
self.switch={
0x00: self.msg_contact_info,
0x22: self.msg_get_time,
0x71: self.msg_collector_data,
0x04: self.msg_inverter_data,
}
'''
Empty methods, that have to be implemented in any child class which don't use asyncio
'''
def _read(self) -> None: # read data bytes from socket and copy them to our _recv_buffer
return
'''
Our public methods
'''
def read(self) -> None:
self._read()
if not self.header_valid:
self.__parse_header(self._recv_buffer, len(self._recv_buffer))
if self.header_valid and len(self._recv_buffer) >= (self.header_len+self.data_len):
self.__dispatch_msg()
self.__flush_recv_msg()
return
def forward(self, buffer, buflen) -> None:
tsun = Config.get('tsun')
if tsun['enabled']:
self._forward_buffer = buffer[:buflen]
hex_dump_memory(logging.DEBUG, 'Store for forwarding:', buffer, buflen)
self.__parse_header(self._forward_buffer, len(self._forward_buffer))
fnc = self.switch.get(self.msg_id, self.msg_unknown)
logger.info(self.__flow_str(self.server_side, 'forwrd') + f' Ctl: {int(self.ctrl):#02x} Msg: {fnc.__name__!r}' )
return
'''
Our private methods
'''
def __flow_str(self, server_side:bool, type:('rx','tx','forwrd', 'drop')):
switch={
'rx': ' <',
'tx': ' >',
'forwrd': '<< ',
'drop': ' xx',
'rxS': '> ',
'txS': '< ',
'forwrdS':' >>',
'dropS': 'xx ',
}
if server_side: type +='S'
return switch.get(type, '???')
def __timestamp(self):
if False:
# utc as epoche
ts = time.time()
else:
# convert localtime in epoche
ts = (datetime.now() - datetime(1970,1,1)).total_seconds()
return round(ts*1000)
# check if there is a complete header in the buffer, parse it
# and set
# self.header_len
# self.data_len
# self.id_str
# self.ctrl
# self.msg_id
#
# if the header is incomplete, then self.header_len is still 0
#
def __parse_header(self, buf:bytes, buf_len:int) -> None:
if (buf_len <5): # enough bytes to read len and id_len?
return
result = struct.unpack_from('!lB', buf, 0)
msg_len = result[0] # length of the complete message
id_len = result[1] # len of variable id string
hdr_len = 5+id_len+2
if (buf_len < hdr_len): # enough bytes for complete header?
return
result = struct.unpack_from(f'!{id_len+1}pBB', buf, 4)
# store parsed header values in the class
self.id_str = result[0]
self.ctrl = Control(result[1])
self.msg_id = result[2]
self.data_len = msg_len-id_len-3
self.header_len = hdr_len
self.header_valid = True
return
def __build_header(self, ctrl) -> None:
self.send_msg_ofs = len (self._send_buffer)
self._send_buffer += struct.pack(f'!l{len(self.id_str)+1}pBB', 0, self.id_str, ctrl, self.msg_id)
fnc = self.switch.get(self.msg_id, self.msg_unknown)
logger.info(self.__flow_str(self.server_side, 'tx') + f' Ctl: {int(self.ctrl):#02x} Msg: {fnc.__name__!r}' )
def __finish_send_msg(self) -> None:
_len = len(self._send_buffer) - self.send_msg_ofs
struct.pack_into('!l',self._send_buffer, self.send_msg_ofs, _len-4)
def __dispatch_msg(self) -> None:
hex_dump_memory(logging.INFO, f'Received from {self.addr}:', self._recv_buffer, self.header_len+self.data_len)
fnc = self.switch.get(self.msg_id, self.msg_unknown)
logger.info(self.__flow_str(self.server_side, 'rx') + f' Ctl: {int(self.ctrl):#02x} Msg: {fnc.__name__!r}' )
fnc()
def __flush_recv_msg(self) -> None:
self._recv_buffer = self._recv_buffer[(self.header_len+self.data_len):]
self.header_valid = False
'''
Message handler methods
'''
def msg_contact_info(self):
if self.ctrl.is_ind():
self.__build_header(0x99)
self._send_buffer += b'\x01'
self.__finish_send_msg()
elif self.ctrl.is_resp():
return # ignore received response from tsun
self.forward(self._recv_buffer, self.header_len+self.data_len)
def msg_get_time(self):
if self.ctrl.is_ind():
ts = self.__timestamp()
logger.debug(f'time: {ts:08x}')
self.__build_header(0x99)
self._send_buffer += struct.pack('!q', ts)
self.__finish_send_msg()
elif self.ctrl.is_resp():
result = struct.unpack_from(f'!q', self._recv_buffer, self.header_len)
logger.debug(f'tsun-time: {result[0]:08x}')
return # ignore received response from tsun
self.forward(self._recv_buffer, self.header_len+self.data_len)
def parse_msg_header(self):
result = struct.unpack_from('!lB', self._recv_buffer, self.header_len)
data_id = result[0] # ID of the data message
id_len = result[1] # len of variable id string
logger.debug(f'Data_ID: {data_id} id_len: {id_len}')
msg_hdr_len= 5+id_len+9
result = struct.unpack_from(f'!{id_len+1}pBq', self._recv_buffer, self.header_len+4)
logger.debug(f'ID: {result[0]} B: {result[1]}')
logger.debug(f'time: {result[2]:08x}')
#logger.info(f'time: {datetime.utcfromtimestamp(result[2]).strftime("%Y-%m-%d %H:%M:%S")}')
return msg_hdr_len
def msg_collector_data(self):
if self.ctrl.is_ind():
self.__build_header(0x99)
self._send_buffer += b'\x01'
self.__finish_send_msg()
elif self.ctrl.is_resp():
return # ignore received response
self.forward(self._recv_buffer, self.header_len+self.data_len)
self.__process_data()
def msg_inverter_data(self):
if self.ctrl.is_ind():
self.__build_header(0x99)
self._send_buffer += b'\x01'
self.__finish_send_msg()
elif self.ctrl.is_resp():
return # ignore received response
self.forward(self._recv_buffer, self.header_len+self.data_len)
self.__process_data()
def __process_data(self):
msg_hdr_len = self.parse_msg_header()
for key, update in self.db.parse(self._recv_buffer[self.header_len + msg_hdr_len:]):
if update: self.new_data[key] = True
def msg_unknown(self):
self.forward(self._recv_buffer, self.header_len+self.data_len)
def __del__ (self):
logger.debug ("Messages __del__")

65
app/src/mqtt.py Normal file
View File

@@ -0,0 +1,65 @@
import asyncio, logging
import aiomqtt
from config import Config
logger_mqtt = logging.getLogger('mqtt')
class Singleton(type):
_instances = {}
def __call__(cls, *args, **kwargs):
logger_mqtt.debug(f'singleton: __call__')
if cls not in cls._instances:
cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
return cls._instances[cls]
class Mqtt(metaclass=Singleton):
client = None
def __init__(self):
logger_mqtt.debug(f'MQTT: __init__')
loop = asyncio.get_event_loop()
self.task = loop.create_task(self.__loop())
def __del__(self):
logger_mqtt.debug(f'MQTT: __del__')
async def close(self) -> None:
logger_mqtt.debug(f'MQTT: close')
self.task.cancel()
try:
await self.task
except Exception as e:
logging.debug(f"Mqtt.close: exception: {e} ...")
async def publish(self, topic: str, payload: str | bytes | bytearray | int | float | None = None) -> None:
if self.client:
await self.client.publish(topic, payload)
async def __loop(self) -> None:
mqtt = Config.get('mqtt')
ha = Config.get('ha')
logger_mqtt.info(f'start MQTT: host:{mqtt["host"]} port:{mqtt["port"]} user:{mqtt["user"]}')
self.client = aiomqtt.Client(hostname=mqtt['host'], port=mqtt['port'], username=mqtt['user'], password=mqtt['passwd'])
interval = 5 # Seconds
while True:
try:
async with self.client:
async with self.client.messages() as messages:
await self.client.subscribe(f"{ha['auto_conf_prefix']}/status")
async for message in messages:
logger_mqtt.info(f'Home-Assistant Status: {message.payload.decode("UTF-8")}')
except aiomqtt.MqttError:
logger_mqtt.info(f"Connection lost; Reconnecting in {interval} seconds ...")
await asyncio.sleep(interval)
except asyncio.CancelledError:
logger_mqtt.debug(f"MQTT task cancelled")
self.client = None
return

43
app/src/proxy.py Normal file
View File

@@ -0,0 +1,43 @@
import asyncio, logging, traceback
from async_stream import AsyncStream
class Proxy:
def __init__ (proxy, reader, writer, addr):
proxy.ServerStream = AsyncStream(proxy, reader, writer, addr)
proxy.ClientStream = None
async def server_loop(proxy, addr):
logging.info(f'Accept connection from {addr}')
await proxy.ServerStream.loop()
logging.info(f'Close server connection {addr}')
if proxy.ClientStream:
logging.debug ("close client connection")
proxy.ClientStream.close()
async def client_loop(proxy, addr):
await proxy.ClientStream.loop()
logging.info(f'Close client connection {addr}')
proxy.ServerStream.remoteStream = None
proxy.ClientStream = None
async def CreateClientStream (proxy, stream, host, port):
addr = (host, port)
try:
connect = asyncio.open_connection(host, port)
reader, writer = await connect
logging.info(f'Connected to {addr}')
proxy.ClientStream = AsyncStream(proxy, reader, writer, addr, stream, server_side=False)
asyncio.create_task(proxy.client_loop(addr))
except ConnectionRefusedError as error:
logging.info(f'{error}')
except Exception:
logging.error(
f"Proxy: Exception for {addr}:\n"
f"{traceback.format_exc()}")
return proxy.ClientStream
def __del__ (proxy):
logging.debug ("Proxy __del__")

80
app/src/server.py Normal file
View File

@@ -0,0 +1,80 @@
import logging, asyncio, signal, functools, os
#from logging.handlers import TimedRotatingFileHandler
from logging import config
from async_stream import AsyncStream
from proxy import Proxy
from config import Config
from mqtt import Mqtt
async def handle_client(reader, writer):
'''Handles a new incoming connection and starts an async loop'''
addr = writer.get_extra_info('peername')
await Proxy(reader, writer, addr).server_loop(addr)
def handle_SIGTERM(loop):
'''Close all TCP connections and stop the event loop'''
logging.info('Shutdown due to SIGTERM')
#
# first, close all open TCP connections
#
for stream in AsyncStream:
stream.close()
#
# at last, we stop the loop
#
loop.stop()
logging.info('Shutdown complete')
if __name__ == "__main__":
#
# Setup our daily, rotating logger
#
serv_name = os.getenv('SERVICE_NAME', 'proxy')
logging.config.fileConfig('logging.ini')
logging.info(f'Server "{serv_name}" will be started')
# read config file
Config.read()
loop = asyncio.get_event_loop()
# call the Mqtt singleton to establish the connection to the mqtt broker
mqtt = Mqtt()
#
# Register some UNIX Signal handler for a gracefully server shutdown on Docker restart and stop
#
for signame in ('SIGINT','SIGTERM'):
loop.add_signal_handler(getattr(signal, signame), functools.partial(handle_SIGTERM, loop))
#
# Create a task for our listening server. This must be a task! If we called start_server directly
# from our main task, the event loop would block and we couldn't receive and handle the UNIX signals!
#
loop.create_task(asyncio.start_server(handle_client, '0.0.0.0', 5005))
try:
loop.run_forever()
except KeyboardInterrupt:
pass
finally:
logging.info ('Close MQTT Task')
loop.run_until_complete(mqtt.close())
mqtt = None # release the last reference to the singleton
logging.info ('Close event loop')
loop.close()
logging.info (f'Finally, exit Server "{serv_name}"')

104
docker-compose.yaml Normal file
View File

@@ -0,0 +1,104 @@
version: '3.0'
services:
####### H O M E - A S S I S T A N T #####
home-assistant:
container_name: home-assistant
#image: homeassistant/home-assistant:latest
image: ghcr.io/home-assistant/home-assistant:stable
restart: unless-stopped
depends_on:
- mqtt
environment:
- TZ=Europe/Brussels
- PUID=1000
- PGID=1000
- UMASK=007
- PACKAGES=iputils
cap_drop:
- ALL
cap_add:
- CHOWN
- DAC_OVERRIDE
- FSETID
- FOWNER
- SETGID
- SETUID
- SYS_CHROOT
- KILL
- NET_RAW
- NET_ADMIN
security_opt:
- no-new-privileges
ports:
- 127.0.0.1:8123:8123
volumes:
- ${PROJECT_DIR}./homeassistant/config:/config
- /etc/localtime:/etc/localtime:ro
healthcheck:
test: curl --fail http://0.0.0.0:8123/auth/providers || exit 1
interval: 90s
retries: 5
start_period: 5s
timeout: 15s
# privileged: false
networks:
- outside
####### M Q T T - B R O K E R #####
mqtt:
container_name: mqtt-broker
image: eclipse-mosquitto:2
expose:
- 1883
volumes:
- ${PROJECT_DIR}./mosquitto/config:/mosquitto/config
- ${PROJECT_DIR}./mosquitto/data:/mosquitto/data
networks:
outside:
ipv4_address: 172.28.1.5 # static IP required to receive mDNS traffic
####### T S U N - P R O X Y ######
tsun-proxy:
container_name: tsun-proxy
image: docker.io/sallius/tsun-gen3-proxy:latest
build:
context: https://gitea.allius.de/allius/tsun-gen3-proxy.git#master:app
args:
- UID=1026
restart: unless-stopped
depends_on:
- mqtt
environment:
- TZ=Europe/Brussels
- SERVICE_NAME=tsun-proxy
dns:
- 8.8.8.8
- 8.8.4.4
ports:
- 127.0.0.1:5005:5005
volumes:
- ${PROJECT_DIR}./tsun-proxy/log:/home/tsun-proxy/log
- ${PROJECT_DIR}./tsun-proxy/config:/home/tsun-proxy/config
networks:
- outside
####### N E T W O R K S ######
networks:
outside:
name: home-assistant
external: true
ipam:
driver: default
config:
- subnet: 172.28.1.0/26
ip_range: 172.28.1.32/27
gateway: 172.28.1.62