* S allius/issue117 (#118)

* add shutdown flag

* add more register definitions

* add start command for client-side connections

* add first support for port 8899

* fix shutdown

* add client_mode configuration

* read client_mode config to setup inverter connections

* add client_mode connections over port 8899

* add preview build

* Update README.md

describe the new client-mode over port 8899 for GEN3PLUS
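
For illustration, a minimal asyncio sketch of the client mode described above: instead of waiting for the inverter to dial in, the proxy opens an outbound connection to the inverter's local Solarman port 8899 (host address and handler are assumptions, not the project's actual API):

import asyncio

async def poll_inverter(host: str, port: int = 8899) -> bytes:
    # client mode: the proxy actively connects to the GEN3PLUS inverter
    reader, writer = await asyncio.open_connection(host, port)
    try:
        # the real proxy would now issue its start command and run the
        # Modbus polling loop; here we simply wait for one response frame
        return await reader.read(1024)
    finally:
        writer.close()
        await writer.wait_closed()

# usage (hypothetical inverter address):
# asyncio.run(poll_inverter('192.168.1.50'))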

* MODBUS: the last digit of the inverter version is a hexadecimal number (#121)

* S allius/issue117 (#122)

* add shutdown flag

* add more register definitions

* add start command for client-side connections

* add first support for port 8899

* fix shutdown

* add client_mode configuration

* read client_mode config to setup inverter connections

* add client_mode connections over port 8899

* add preview build

* add documentation for client_mode

* catch OS errors and log them at DEBUG level

* update changelog

* make the maximum output coefficient configurable (#124)

* S allius/issue120 (#126)

* add config option to disable the modbus polling

* read more modbus regs in polling mode
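
The polling requests added here are standard Modbus RTU "read holding registers" frames (function 0x03). A self-contained sketch of such a request, matching the 48-register read at 0x3000 visible in the diff below; the device address 0x01 is an assumption:

import struct

def crc16_modbus(frame: bytes) -> bytes:
    # CRC-16/Modbus, appended little-endian to every RTU frame
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc.to_bytes(2, 'little')

def read_regs_request(dev_addr: int, start: int, count: int) -> bytes:
    # function 0x03 = read holding registers
    pdu = struct.pack('>BBHH', dev_addr, 0x03, start, count)
    return pdu + crc16_modbus(pdu)

# e.g. poll 48 registers starting at 0x3000
req = read_regs_request(0x01, 0x3000, 48)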

* extend connection timeouts if polling mode is disabled

* update changelog

* S allius/issue125 (#127)

* fix linter warning

* move sequence diagram to the wiki

* catch asyncio.CancelledError

* S allius/issue128 (#130)

* set Register.NO_INPUTS to a fixed value of 4 for GEN3PLUS

* don't set Register.NO_INPUTS per MODBUS

* fix unit tests

* register OUTPUT_COEFFICIENT at HA

* update changelog

* - Home Assistant: improve inverter status value texts

* - GEN3: add inverter status

* on closing send outstanding MQTT data to the broker

* force MQTT publish on every conn open and close

* reset inverter state on close

- workaround which resets the inverter status to
  offline when the inverter has very low
  output power on connection close

* improve client mode
- reduce the polling cadence to 30s
- set controller statistics for HA

* client mode: set controller IP for HA

* S allius/issue131 (#132)

* Make __publish_outstanding_mqtt public

* update proxy counter

- on client-mode connection establishment or
  disconnect, update the connection counter

* Update README.md (#133)

* reset inverter state on close

- workaround which resets the inverter status to
  offline when the inverter has very low
  output power on connection close

* S allius/issue134 (#135)

* add polling interval and method ha_remove()

* add client_mode arg to constructors

- add PollingInterval

* hide some topics in client mode

- we hide topics in HA by sending an empty register
  MQTT topic during HA auto configuration
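
For context, a common Home Assistant MQTT discovery pattern for suppressing an entity (not necessarily the proxy's exact implementation) is to publish an empty, retained payload to the entity's config topic; the topic layout below is an assumption:

import aiomqtt

async def hide_entity(broker: str, node_id: str, register: str) -> None:
    # an empty retained config payload tells HA to remove / not create the entity
    topic = f'homeassistant/sensor/{node_id}/{register}/config'
    async with aiomqtt.Client(broker) as client:
        await client.publish(topic, payload='', retain=True)

# e.g. asyncio.run(hide_entity('broker.local', 'G3P', 'ts_total'))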

* add client_mode value

* update class diagram

* fix modbus close handler

- fix empty call and clean up the queue
- add unit test

* don't send an initial 1710 msg in client mode

* change HA icon for inverter status

* increase test coverage

* accelerate timer tests

* bump aiomqtt and schema to latest release (#137)

* MQTT timestamps and protocol improvements (#140)

* add TS_INPUT, TS_GRID and TS_TOTAL

* prepare MQTT timestamps

- add _set_mqtt_timestamp method
- fix hexdump printing

* push dev and debug images to docker.io

* add Unix epoch timestamps to MQTT packets
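
The Unix timestamps come from combining the frame's device-relative counter with the time offset learned from the sync message (see ts = total + self.time_ofs in the diff below). A small sketch of that conversion, with made-up values:

from datetime import datetime, timezone

def to_epoch(total: int, time_ofs: int) -> int | None:
    # no time offset received yet -> no absolute timestamp available
    if not time_ofs:
        return None
    return total + time_ofs

ts = to_epoch(total=120, time_ofs=1_723_240_000)
if ts is not None:
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())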

* set timezone for unit tests

* set name for the set-timezone step

* trigger new action

* GEN3 and GEN3PLUS: handle multiple messages

- read: iterate over the receive buffer
- forward: append messages to the forward buffer
- _update_header: iterate over the forward buffer
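
Behind this lies the Solarman V5 framing: an 11-byte header (start byte 0xA5, 16-bit payload length at offset 1), the payload, and a 2-byte trailer (checksum, end byte), i.e. 13 + data_len bytes per message. A sketch of splitting a buffer that holds several such messages, mirroring the loop added to __update_header in the diff below (buffer contents are illustrative):

import struct

def split_frames(buf: bytes) -> list[bytes]:
    # one frame = 11-byte header + data_len payload bytes + 2-byte trailer
    frames = []
    ofs = 0
    while ofs + 13 <= len(buf):
        start, data_len = struct.unpack_from('<BH', buf, ofs)
        frame_len = 13 + data_len
        if start != 0xA5 or ofs + frame_len > len(buf):
            break  # out of sync or incomplete frame: wait for more data
        frames.append(bytes(buf[ofs:ofs + frame_len]))
        ofs += frame_len
    return frames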

* GEN3: optimize timeout handling

- longer timeout in states init and received
- go to state pending only from state up

* update changelog

* cleanup

* print coloured logs

* Create sonarcloud.yml (#143)

* Update sonarcloud.yml

* Update sonarcloud.yml

* Update sonarcloud.yml

* Update sonarcloud.yml

* Update sonarcloud.yml

* build multi-arch images with SBOMs (#146)

* don't send MODBUS request when state is not up (#147)

* adapt timings

* don't send MODBUS request when state is not up

* adapt unit test

* make test code more clean (#148)

* Make test code more clean (#149)

* cleanup

* Code coverage for SonarCloud (#150)


* cleanup code and unit tests

* add test coverage for SonarCloud

* configure SonarCloud

* update changelog

* Do not build on *.yml changes

* prepare release 0.10.0

* disable MODBUS_POLLING for GEN3PLUS in example config

* bump aiohttp to version 3.10.2

* code cleanup

* Fetch all history for all tags and branches
Author: Stefan Allius
Committed: 2024-08-09 23:16:47 +02:00 (by GitHub)
Parent: a42ba8a8c6
Commit: b364fb3f8e
37 changed files with 2096 additions and 1355 deletions


@@ -1,5 +1,4 @@
import struct
# import json
import logging
import time
import asyncio
@@ -19,7 +18,6 @@ else: # pragma: no cover
from my_timer import Timer
from gen3plus.infos_g3p import InfosG3P
from infos import Register
# import traceback
logger = logging.getLogger('msg')
@@ -54,18 +52,24 @@ class SolarmanV5(Message):
AT_CMD = 1
MB_RTU_CMD = 2
MB_START_TIMEOUT = 40
'''start delay for Modbus polling in server mode'''
MB_REGULAR_TIMEOUT = 60
'''regular Modbus polling time in server mode'''
MB_CLIENT_DATA_UP = 30
'''Data up time in client mode'''
def __init__(self, server_side: bool):
def __init__(self, server_side: bool, client_mode: bool):
super().__init__(server_side, self.send_modbus_cb, mb_timeout=5)
self.header_len = 11 # overwrite constructor in class Message
self.control = 0
self.seq = Sequence(server_side)
self.snr = 0
self.db = InfosG3P()
self.db = InfosG3P(client_mode)
self.time_ofs = 0
self.forward_at_cmd_resp = False
self.no_forwarding = False
'''not allowed to connect to TSUN cloud by connection type'''
self.switch = {
0x4210: self.msg_data_ind, # real time data
@@ -128,12 +132,24 @@ class SolarmanV5(Message):
self.node_id = 'G3P' # will be overwritten in __set_serial_no
self.mb_timer = Timer(self.mb_timout_cb, self.node_id)
self.mb_timeout = self.MB_REGULAR_TIMEOUT
self.mb_start_timeout = self.MB_START_TIMEOUT
'''timer value for next Modbus polling request'''
self.modbus_polling = False
'''
Our public methods
'''
def close(self) -> None:
logging.debug('Solarman.close()')
if self.server_side:
# set inverter state to offline, if output power is very low
logging.debug('close power: '
f'{self.db.get_db_value(Register.OUTPUT_POWER, -1)}')
if self.db.get_db_value(Register.OUTPUT_POWER, 999) < 2:
self.db.set_db_def_value(Register.INVERTER_STATUS, 0)
self.new_data['env'] = True
# we have references to methods of this class in self.switch
# so we have to erase self.switch, otherwise this instance can't be
# deallocated by the garbage collector ==> we get a memory leak
@@ -143,6 +159,31 @@ class SolarmanV5(Message):
self.mb_timer.close()
super().close()
async def send_start_cmd(self, snr: int, host: str,
start_timeout=MB_CLIENT_DATA_UP):
self.no_forwarding = True
self.snr = snr
self.__set_serial_no(snr)
self.mb_timeout = start_timeout
self.db.set_db_def_value(Register.IP_ADDRESS, host)
self.db.set_db_def_value(Register.POLLING_INTERVAL,
self.mb_timeout)
self.db.set_db_def_value(Register.HEARTBEAT_INTERVAL,
120) # fixme
self.new_data['controller'] = True
self.state = State.up
self._send_modbus_cmd(Modbus.READ_REGS, 0x3000, 48, logging.DEBUG)
self.mb_timer.start(self.mb_timeout)
def new_state_up(self):
if self.state is not State.up:
self.state = State.up
if (self.modbus_polling):
self.mb_timer.start(self.mb_start_timeout)
self.db.set_db_def_value(Register.POLLING_INTERVAL,
self.mb_timeout)
def __set_serial_no(self, snr: int):
serial_no = str(snr)
if self.unique_id == serial_no:
@@ -159,6 +200,7 @@ class SolarmanV5(Message):
found = True
self.node_id = inv['node_id']
self.sug_area = inv['suggested_area']
self.modbus_polling = inv['modbus_polling']
logger.debug(f'SerialNo {serial_no} allowed! area:{self.sug_area}') # noqa: E501
self.db.set_pv_module_details(inv)
@@ -175,50 +217,47 @@ class SolarmanV5(Message):
self.unique_id = serial_no
def read(self) -> float:
'''process all received messages in the _recv_buffer'''
self._read()
while True:
if not self.header_valid:
self.__parse_header(self._recv_buffer, len(self._recv_buffer))
if not self.header_valid:
self.__parse_header(self._recv_buffer, len(self._recv_buffer))
if self.header_valid and len(self._recv_buffer) >= \
(self.header_len + self.data_len+2):
log_lvl = self.log_lvl.get(self.control, logging.WARNING)
if callable(log_lvl):
log_lvl = log_lvl()
hex_dump_memory(log_lvl, f'Received from {self.addr}:',
self._recv_buffer, self.header_len +
self.data_len+2)
if self.__trailer_is_ok(self._recv_buffer, self.header_len
+ self.data_len + 2):
if self.state == State.init:
self.state = State.received
if self.header_valid and len(self._recv_buffer) >= (self.header_len +
self.data_len+2):
log_lvl = self.log_lvl.get(self.control, logging.WARNING)
if callable(log_lvl):
log_lvl = log_lvl()
hex_dump_memory(log_lvl, f'Received from {self.addr}:',
self._recv_buffer, self.header_len+self.data_len+2)
if self.__trailer_is_ok(self._recv_buffer, self.header_len
+ self.data_len + 2):
if self.state == State.init:
self.state = State.received
self.__set_serial_no(self.snr)
self.__dispatch_msg()
self.__flush_recv_msg()
return 0 # wait 0s before sending a response
self.__set_serial_no(self.snr)
self.__dispatch_msg()
self.__flush_recv_msg()
else:
return 0 # wait 0s before sending a response
def forward(self, buffer, buflen) -> None:
'''add the actual receive msg to the forwarding queue'''
if self.no_forwarding:
return
tsun = Config.get('solarman')
if tsun['enabled']:
self._forward_buffer = buffer[:buflen]
self._forward_buffer += buffer[:buflen]
hex_dump_memory(logging.DEBUG, 'Store for forwarding:',
buffer, buflen)
self.__parse_header(self._forward_buffer,
len(self._forward_buffer))
fnc = self.switch.get(self.control, self.msg_unknown)
logger.info(self.__flow_str(self.server_side, 'forwrd') +
f' Ctl: {int(self.control):#04x}'
f' Msg: {fnc.__name__!r}')
return
def _init_new_client_conn(self) -> bool:
# self.__build_header(0x91)
# self._send_buffer += struct.pack(f'!{len(contact_name)+1}p'
# f'{len(contact_mail)+1}p',
# contact_name, contact_mail)
# self.__finish_send_msg()
return False
'''
@@ -270,7 +309,6 @@ class SolarmanV5(Message):
self._recv_buffer = bytearray()
return
self.header_valid = True
return
def __trailer_is_ok(self, buf: bytes, buf_len: int) -> bool:
crc = buf[self.data_len+11]
@@ -320,13 +358,17 @@ class SolarmanV5(Message):
'''update header for message before forwarding,
set sequence and checksum'''
_len = len(_forward_buffer)
struct.pack_into('<H', _forward_buffer, 1,
_len-13)
struct.pack_into('<H', _forward_buffer, 5,
self.seq.get_send())
ofs = 0
while ofs < _len:
result = struct.unpack_from('<BH', _forward_buffer, ofs)
data_len = result[1] # payload length of this frame
check = sum(_forward_buffer[1:_len-2]) & 0xff
struct.pack_into('<B', _forward_buffer, _len-2, check)
struct.pack_into('<H', _forward_buffer, ofs+5,
self.seq.get_send())
check = sum(_forward_buffer[ofs+1:ofs+data_len+11]) & 0xff
struct.pack_into('<B', _forward_buffer, ofs+data_len+11, check)
ofs += (13 + data_len)
def __dispatch_msg(self) -> None:
fnc = self.switch.get(self.control, self.msg_unknown)
@@ -378,27 +420,27 @@ class SolarmanV5(Message):
self._send_modbus_cmd(func, addr, val, log_lvl)
def mb_timout_cb(self, exp_cnt):
self.mb_timer.start(self.MB_REGULAR_TIMEOUT)
self.mb_timer.start(self.mb_timeout)
self._send_modbus_cmd(Modbus.READ_REGS, 0x3008, 21, logging.DEBUG)
self._send_modbus_cmd(Modbus.READ_REGS, 0x3000, 48, logging.DEBUG)
if 0 == (exp_cnt % 30):
if 1 == (exp_cnt % 30):
# logging.info("Regular Modbus Status request")
self._send_modbus_cmd(Modbus.READ_REGS, 0x2007, 2, logging.DEBUG)
self._send_modbus_cmd(Modbus.READ_REGS, 0x2000, 96, logging.DEBUG)
def at_cmd_forbidden(self, cmd: str, connection: str) -> bool:
return not cmd.startswith(tuple(self.at_acl[connection]['allow'])) or \
cmd.startswith(tuple(self.at_acl[connection]['block']))
async def send_at_cmd(self, AT_cmd: str) -> None:
async def send_at_cmd(self, at_cmd: str) -> None:
if self.state != State.up:
logger.warning(f'[{self.node_id}] ignore AT+ cmd,'
' as the state is not UP')
return
AT_cmd = AT_cmd.strip()
at_cmd = at_cmd.strip()
if self.at_cmd_forbidden(cmd=AT_cmd, connection='mqtt'):
data_json = f'\'{AT_cmd}\' is forbidden'
if self.at_cmd_forbidden(cmd=at_cmd, connection='mqtt'):
data_json = f'\'{at_cmd}\' is forbidden'
node_id = self.node_id
key = 'at_resp'
logger.info(f'{key}: {data_json}')
@@ -407,8 +449,8 @@ class SolarmanV5(Message):
self.forward_at_cmd_resp = False
self.__build_header(0x4510)
self._send_buffer += struct.pack(f'<BHLLL{len(AT_cmd)}sc', self.AT_CMD,
2, 0, 0, 0, AT_cmd.encode('utf-8'),
self._send_buffer += struct.pack(f'<BHLLL{len(at_cmd)}sc', self.AT_CMD,
2, 0, 0, 0, at_cmd.encode('utf-8'),
b'\r')
self.__finish_send_msg()
try:
@@ -421,21 +463,21 @@ class SolarmanV5(Message):
def __build_model_name(self):
db = self.db
MaxPow = db.get_db_value(Register.MAX_DESIGNED_POWER, 0)
Rated = db.get_db_value(Register.RATED_POWER, 0)
Model = None
if MaxPow == 2000:
if Rated == 800 or Rated == 600:
Model = f'TSOL-MS{MaxPow}({Rated})'
max_pow = db.get_db_value(Register.MAX_DESIGNED_POWER, 0)
rated = db.get_db_value(Register.RATED_POWER, 0)
model = None
if max_pow == 2000:
if rated == 800 or rated == 600:
model = f'TSOL-MS{max_pow}({rated})'
else:
Model = f'TSOL-MS{MaxPow}'
elif MaxPow == 1800 or MaxPow == 1600:
Model = f'TSOL-MS{MaxPow}'
if Model:
logger.info(f'Model: {Model}')
self.db.set_db_def_value(Register.EQUIPMENT_MODEL, Model)
model = f'TSOL-MS{max_pow}'
elif max_pow == 1800 or max_pow == 1600:
model = f'TSOL-MS{max_pow}'
if model:
logger.info(f'Model: {model}')
self.db.set_db_def_value(Register.EQUIPMENT_MODEL, model)
def __process_data(self, ftype):
def __process_data(self, ftype, ts):
inv_update = False
msg_type = self.control >> 8
for key, update in self.db.parse(self._recv_buffer, msg_type, ftype,
@@ -443,6 +485,7 @@ class SolarmanV5(Message):
if update:
if key == 'inverter':
inv_update = True
self._set_mqtt_timestamp(key, ts)
self.new_data[key] = True
if inv_update:
@@ -459,16 +502,18 @@ class SolarmanV5(Message):
data = self._recv_buffer[self.header_len:]
result = struct.unpack_from('<BLLL', data, 0)
ftype = result[0] # always 2
# total = result[1]
total = result[1]
tim = result[2]
res = result[3] # always zero
logger.info(f'frame type:{ftype:02x}'
f' timer:{tim:08x}s null:{res}')
# if self.time_ofs:
# dt = datetime.fromtimestamp(total + self.time_ofs)
# logger.info(f'ts: {dt.strftime("%Y-%m-%d %H:%M:%S")}')
self.__process_data(ftype)
if self.time_ofs:
# dt = datetime.fromtimestamp(total + self.time_ofs)
# logger.info(f'ts: {dt.strftime("%Y-%m-%d %H:%M:%S")}')
ts = total + self.time_ofs
else:
ts = None
self.__process_data(ftype, ts)
self.__forward_msg()
self.__send_ack_rsp(0x1110, ftype)
@@ -476,7 +521,7 @@ class SolarmanV5(Message):
data = self._recv_buffer
result = struct.unpack_from('<BHLLLHL', data, self.header_len)
ftype = result[0] # 1 or 0x81
# total = result[2]
total = result[2]
tim = result[3]
if 1 == ftype:
self.time_ofs = result[4]
@@ -484,16 +529,17 @@ class SolarmanV5(Message):
cnt = result[6]
logger.info(f'ftype:{ftype:02x} timer:{tim:08x}s'
f' ??: {unkn:04x} cnt:{cnt}')
# if self.time_ofs:
# dt = datetime.fromtimestamp(total + self.time_ofs)
# logger.info(f'ts: {dt.strftime("%Y-%m-%d %H:%M:%S")}')
if self.time_ofs:
# dt = datetime.fromtimestamp(total + self.time_ofs)
# logger.info(f'ts: {dt.strftime("%Y-%m-%d %H:%M:%S")}')
ts = total + self.time_ofs
else:
ts = None
self.__process_data(ftype)
self.__process_data(ftype, ts)
self.__forward_msg()
self.__send_ack_rsp(0x1210, ftype)
if self.state is not State.up:
self.state = State.up
self.mb_timer.start(self.MB_START_TIMEOUT)
self.new_state_up()
def msg_sync_start(self):
data = self._recv_buffer[self.header_len:]
@@ -514,17 +560,17 @@ class SolarmanV5(Message):
result = struct.unpack_from('<B', data, 0)
ftype = result[0]
if ftype == self.AT_CMD:
AT_cmd = data[15:].decode()
if self.at_cmd_forbidden(cmd=AT_cmd, connection='tsun'):
at_cmd = data[15:].decode()
if self.at_cmd_forbidden(cmd=at_cmd, connection='tsun'):
self.inc_counter('AT_Command_Blocked')
return
self.inc_counter('AT_Command')
self.forward_at_cmd_resp = True
elif ftype == self.MB_RTU_CMD:
if self.remoteStream.mb.recv_req(data[15:],
self.remoteStream.
__forward_msg):
if self.remote_stream.mb.recv_req(data[15:],
self.remote_stream.
__forward_msg):
self.inc_counter('Modbus_Command')
else:
logger.error('Invalid Modbus Msg')
@@ -533,7 +579,7 @@ class SolarmanV5(Message):
self.__forward_msg()
def publish_mqtt(self, key, data):
def publish_mqtt(self, key, data): # pragma: no cover
asyncio.ensure_future(
self.mqtt.publish(key, data))
@@ -543,9 +589,9 @@ class SolarmanV5(Message):
if self.forward_at_cmd_resp:
return logging.INFO
return logging.DEBUG
elif ftype == self.MB_RTU_CMD:
if self.server_side:
return self.mb.last_log_lvl
elif ftype == self.MB_RTU_CMD \
and self.server_side:
return self.mb.last_log_lvl
return logging.WARNING
@@ -576,6 +622,7 @@ class SolarmanV5(Message):
if update:
if key == 'inverter':
inv_update = True
self._set_mqtt_timestamp(key, self._timestamp())
self.new_data[key] = True
if inv_update:
@@ -590,9 +637,7 @@ class SolarmanV5(Message):
self.__forward_msg()
self.__send_ack_rsp(0x1710, ftype)
if self.state is not State.up:
self.state = State.up
self.mb_timer.start(self.MB_START_TIMEOUT)
self.new_state_up()
def msg_sync_end(self):
data = self._recv_buffer[self.header_len:]