* S allius/issue117 (#118)

* add shutdown flag

* add more register definitions

* add start command for client-side connections

* add first support for port 8899

* fix shutdown

* add client_mode configuration

* read client_mode config to setup inverter connections

* add client_mode connections over port 8899

* add preview build

* Update README.md

describe the new client-mode over port 8899 for GEN3PLUS

* MODBUS: the last digit of the inverter version is a hexadecimal number (#121)

* S allius/issue117 (#122)

* add shutdown flag

* add more register definitions

* add start command for client-side connections

* add first support for port 8899

* fix shutdown

* add client_mode configuration

* read client_mode config to setup inverter connections

* add client_mode connections over port 8899

* add preview build

* add documentation for client_mode

* catch OS errors and log them with DEBUG level

* update changelog

* make the maximum output coefficient configurable (#124)

* S allius/issue120 (#126)

* add config option to disable the modbus polling

* read more modbus regs in polling mode

* extend connection timeouts if polling mode is disabled

* update changelog

* S allius/issue125 (#127)

* fix linter warning

* move sequence diagram to the wiki

* catch asyncio.CancelledError

* S allius/issue128 (#130)

* set Register.NO_INPUTS fix to 4 for GEN3PLUS

* don't set Register.NO_INPUTS per MODBUS

* fix unit tests

* register OUTPUT_COEFFICIENT at HA

* update changelog

* - Home Assistant: improve inverter status value texts

* - GEN3: add inverter status

* on closing send outstanding MQTT data to the broker

* force MQTT publish on every conn open and close

* reset inverter state on close

- workaround that resets the inverter status to
  offline when the inverter reports very low
  output power on connection close

* improve client mode
- reduce the polling cadence to 30s
- set controller statistics for HA

* client mode set controller IP for HA

* S allius/issue131 (#132)

* Make __publish_outstanding_mqtt public

* update proxy counter

- on client-mode connection establishment or
  disconnect, update the connection counter

* Update README.md (#133)

* reset inverter state on close

- workaround that resets the inverter status to
  offline when the inverter reports very low
  output power on connection close

* S allius/issue134 (#135)

* add polling interval and method ha_remove()

* add client_mode arg to constructors

- add PollingInterval

* hide some topics in client mode

- we hide topics in HA by sending an empty register
  MQTT topic during HA auto configuration

* add client_mode value

* update class diagram

* fix modbus close handler

- fix empty call and cleanup que
- add unit test

* don't send an initial 1710 msg in client mode

* change HA icon for inverter status

* increase test coverage

* accelerate timer tests

* bump aiomqtt and schema to latest release (#137)

* MQTT timestamps and protocol improvements (#140)

* add TS_INPUT, TS_GRID and TS_TOTAL

* prepare MQTT timestamps

- add _set_mqtt_timestamp method
- fix hexdump printing

* push dev and debug images to docker.io

* add Unix epoch timestamps for MQTT packets

* set timezone for unit tests

* set name for the timezone-setting step

* trigger new action

* GEN3 and GEN3PLUS: handle multiple messages

- read: iterate over the receive buffer
- forward: append messages to the forward buffer
- _update_header: iterate over the forward buffer

* GEN3: optimize timeout handling

- longer timeout in states init and received
- go to state pending only from state up

* update changelog

* cleanup

* print coloured logs

* Create sonarcloud.yml (#143)

* Update sonarcloud.yml

* Update sonarcloud.yml

* Update sonarcloud.yml

* Update sonarcloud.yml

* Update sonarcloud.yml

* build multi arch images with sboms (#146)

* don't send MODBUS request when state is not up (#147)

* adapt timings

* don't send MODBUS request when state is not up

* adapt unit test

* make test code more clean (#148)

* Make test code more clean (#149)

* cleanup

* Code coverage for SonarCloud (#150)


* cleanup code and unit tests

* add test coverage for SonarCloud

* configure SonarCloud

* update changelog

* Do not build on *.yml changes

* prepare release 0.10.0

* disable MODBUS_POLLING for GEN3PLUS in example config

* bump aiohttp to version 3.10.2

* code cleanup

* Fetch all history for all tags and branches
This commit is contained in:
Stefan Allius
2024-08-09 23:16:47 +02:00
committed by GitHub
parent a42ba8a8c6
commit b364fb3f8e
37 changed files with 2096 additions and 1355 deletions

View File

@@ -17,10 +17,10 @@ class AsyncStream():
'''maximum processing time for a received msg in sec'''
MAX_START_TIME = 400
'''maximum time without a received msg in sec'''
MAX_INV_IDLE_TIME = 90
MAX_INV_IDLE_TIME = 120
'''maximum time without a received msg from the inverter in sec'''
MAX_CLOUD_IDLE_TIME = 360
'''maximum time without a received msg from cloud side in sec'''
MAX_DEF_IDLE_TIME = 360
'''maximum default time without a received msg in sec'''
def __init__(self, reader: StreamReader, writer: StreamWriter,
addr) -> None:
@@ -35,43 +35,50 @@ class AsyncStream():
self.proc_max = 0
def __timeout(self) -> int:
if self.state == State.init:
if self.state == State.init or self.state == State.received:
to = self.MAX_START_TIME
elif self.state == State.up and \
self.server_side and self.modbus_polling:
to = self.MAX_INV_IDLE_TIME
else:
if self.server_side:
to = self.MAX_INV_IDLE_TIME
else:
to = self.MAX_CLOUD_IDLE_TIME
to = self.MAX_DEF_IDLE_TIME
return to
async def publish_outstanding_mqtt(self):
'''Publish all outstanding MQTT topics'''
try:
if self.unique_id:
await self.async_publ_mqtt()
await self._async_publ_mqtt_proxy_stat('proxy')
except Exception:
pass
async def server_loop(self, addr: str) -> None:
'''Loop for receiving messages from the inverter (server-side)'''
logger.info(f'[{self.node_id}:{self.conn_no}] '
f'Accept connection from {addr}')
self.inc_counter('Inverter_Cnt')
await self.publish_outstanding_mqtt()
await self.loop()
self.dec_counter('Inverter_Cnt')
await self.publish_outstanding_mqtt()
logger.info(f'[{self.node_id}:{self.conn_no}] Server loop stopped for'
f' r{self.r_addr}')
# if the server connection closes, we also have to disconnect
# the connection to the TSUN cloud
if self.remoteStream:
if self.remote_stream:
logger.info(f'[{self.node_id}:{self.conn_no}] disc client '
f'connection: [{self.remoteStream.node_id}:'
f'{self.remoteStream.conn_no}]')
await self.remoteStream.disc()
try:
await self._async_publ_mqtt_proxy_stat('proxy')
except Exception:
pass
f'connection: [{self.remote_stream.node_id}:'
f'{self.remote_stream.conn_no}]')
await self.remote_stream.disc()
async def client_loop(self, addr: str) -> None:
async def client_loop(self, _: str) -> None:
'''Loop for receiving messages from the TSUN cloud (client-side)'''
clientStream = await self.remoteStream.loop()
logger.info(f'[{clientStream.node_id}:{clientStream.conn_no}] '
client_stream = await self.remote_stream.loop()
logger.info(f'[{client_stream.node_id}:{client_stream.conn_no}] '
'Client loop stopped for'
f' l{clientStream.l_addr}')
f' l{client_stream.l_addr}')
# if the client connection closes, we don't touch the server
# connection. Instead we erase the client connection stream,
@@ -79,13 +86,13 @@ class AsyncStream():
# establish a new connection to the TSUN cloud
# erase backlink to inverter
clientStream.remoteStream = None
client_stream.remote_stream = None
if self.remoteStream == clientStream:
# logging.debug(f'Client l{clientStream.l_addr} refs:'
# f' {gc.get_referrers(clientStream)}')
if self.remote_stream == client_stream:
# logging.debug(f'Client l{client_stream.l_addr} refs:'
# f' {gc.get_referrers(client_stream)}')
# then erase the client connection
self.remoteStream = None
self.remote_stream = None
async def loop(self) -> Self:
"""Async loop handler for precessing all received messages"""
@@ -196,35 +203,35 @@ class AsyncStream():
if not self._forward_buffer:
return
try:
if not self.remoteStream:
if not self.remote_stream:
await self.async_create_remote()
if self.remoteStream:
if self.remoteStream._init_new_client_conn():
await self.remoteStream.async_write()
if self.remote_stream:
if self.remote_stream._init_new_client_conn():
await self.remote_stream.async_write()
if self.remoteStream:
self.remoteStream._update_header(self._forward_buffer)
if self.remote_stream:
self.remote_stream._update_header(self._forward_buffer)
hex_dump_memory(logging.INFO,
f'Forward to {self.remoteStream.addr}:',
f'Forward to {self.remote_stream.addr}:',
self._forward_buffer,
len(self._forward_buffer))
self.remoteStream.writer.write(self._forward_buffer)
await self.remoteStream.writer.drain()
self.remote_stream.writer.write(self._forward_buffer)
await self.remote_stream.writer.drain()
self._forward_buffer = bytearray(0)
except OSError as error:
if self.remoteStream:
rmt = self.remoteStream
self.remoteStream = None
if self.remote_stream:
rmt = self.remote_stream
self.remote_stream = None
logger.error(f'[{rmt.node_id}:{rmt.conn_no}] Fwd: {error} for '
f'l{rmt.l_addr} | r{rmt.r_addr}')
await rmt.disc()
rmt.close()
except RuntimeError as error:
if self.remoteStream:
rmt = self.remoteStream
self.remoteStream = None
if self.remote_stream:
rmt = self.remote_stream
self.remote_stream = None
logger.info(f'[{rmt.node_id}:{rmt.conn_no}] '
f'Fwd: {error} for {rmt.l_addr}')
await rmt.disc()
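The reworked timeout handling above collapses the old server/cloud branch into a single default and treats `State.received` like `State.init`. A minimal sketch of the selection logic (the `State` values and `MAX_*` constants mirror the diff; the surrounding class is assumed):

```python
from enum import Enum, auto

class State(Enum):
    init = auto()       # connection opened, nothing received yet
    received = auto()   # first package received
    up = auto()         # connection fully established

MAX_START_TIME = 400      # max time until the first message, in sec
MAX_INV_IDLE_TIME = 120   # max idle time for a polled inverter link, in sec
MAX_DEF_IDLE_TIME = 360   # default idle time for all other links, in sec

def timeout(state: State, server_side: bool, modbus_polling: bool) -> int:
    """Pick the idle timeout for the current connection state."""
    if state in (State.init, State.received):
        return MAX_START_TIME
    if state == State.up and server_side and modbus_polling:
        return MAX_INV_IDLE_TIME
    return MAX_DEF_IDLE_TIME
```

Note how disabling `modbus_polling` extends the server-side timeout to the default 360 s, matching the "extend connection timeouts if polling mode is disabled" commit.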

View File

@@ -53,7 +53,12 @@ class Config():
Use(lambda s: s + '/'
if len(s) > 0 and
s[-1] != '/' else s)),
Optional('client_mode'): {
'host': Use(str),
Optional('port', default=8899):
And(Use(int), lambda n: 1024 <= n <= 65535)
},
Optional('modbus_polling', default=True): Use(bool),
Optional('suggested_area', default=""): Use(str),
Optional('pv1'): {
Optional('type'): Use(str),

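The `client_mode` schema above requires a host and accepts an optional port in the unprivileged range, defaulting to 8899. The real code uses the `schema` library; the plain-Python helper below only illustrates the same rules and is not part of the project:

```python
def check_client_mode(cfg: dict) -> dict:
    """Validate a client_mode config block: host is required,
    port defaults to 8899 and must be in 1024..65535."""
    out = {'host': str(cfg['host']),
           'port': int(cfg.get('port', 8899))}
    if not 1024 <= out['port'] <= 65535:
        raise ValueError(f"port out of range: {out['port']}")
    return out
```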
View File

@@ -1,5 +1,4 @@
import logging
# import gc
from asyncio import StreamReader, StreamWriter
from async_stream import AsyncStream
from gen3.talent import Talent
@@ -15,7 +14,7 @@ class ConnectionG3(AsyncStream, Talent):
AsyncStream.__init__(self, reader, writer, addr)
Talent.__init__(self, server_side, id_str)
self.remoteStream: 'ConnectionG3' = remote_stream
self.remote_stream: 'ConnectionG3' = remote_stream
'''
Our public methods
@@ -26,10 +25,10 @@ class ConnectionG3(AsyncStream, Talent):
# logger.info(f'AsyncStream refs: {gc.get_referrers(self)}')
async def async_create_remote(self) -> None:
pass
pass # virtual interface
async def async_publ_mqtt(self) -> None:
pass
pass # virtual interface
def healthy(self) -> bool:
logger.debug('ConnectionG3 healthy()')

View File

@@ -31,6 +31,9 @@ class RegisterMap:
0xffffff06: Register.OTA_START_MSG,
0xffffff07: Register.SW_EXCEPTION,
0xffffff08: Register.MAX_DESIGNED_POWER,
0xffffff09: Register.OUTPUT_COEFFICIENT,
0xffffff0a: Register.INVERTER_STATUS,
0xffffff0b: Register.POLLING_INTERVAL,
0xfffffffe: Register.TEST_REG1,
0xffffffff: Register.TEST_REG2,
0x00000640: Register.OUTPUT_POWER,

View File

@@ -9,9 +9,7 @@ from gen3.connection_g3 import ConnectionG3
from aiomqtt import MqttCodeError
from infos import Infos
# import gc
# logger = logging.getLogger('conn')
logger_mqtt = logging.getLogger('mqtt')
@@ -60,10 +58,10 @@ class InverterG3(Inverter, ConnectionG3):
logging.info(f'[{self.node_id}] Connect to {addr}')
connect = asyncio.open_connection(host, port)
reader, writer = await connect
self.remoteStream = ConnectionG3(reader, writer, addr, self,
False, self.id_str)
logging.info(f'[{self.remoteStream.node_id}:'
f'{self.remoteStream.conn_no}] '
self.remote_stream = ConnectionG3(reader, writer, addr, self,
False, self.id_str)
logging.info(f'[{self.remote_stream.node_id}:'
f'{self.remote_stream.conn_no}] '
f'Connected to {addr}')
asyncio.create_task(self.client_loop(addr))

View File

@@ -1,7 +1,8 @@
import struct
import logging
import time
import pytz
from datetime import datetime
from tzlocal import get_localzone
if __name__ == "app.src.gen3.talent":
from app.src.messages import hex_dump_memory, Message, State
@@ -9,12 +10,14 @@ if __name__ == "app.src.gen3.talent":
from app.src.my_timer import Timer
from app.src.config import Config
from app.src.gen3.infos_g3 import InfosG3
from app.src.infos import Register
else: # pragma: no cover
from messages import hex_dump_memory, Message, State
from modbus import Modbus
from my_timer import Timer
from config import Config
from gen3.infos_g3 import InfosG3
from infos import Register
logger = logging.getLogger('msg')
@@ -41,7 +44,7 @@ class Talent(Message):
MB_REGULAR_TIMEOUT = 60
def __init__(self, server_side: bool, id_str=b''):
super().__init__(server_side, self.send_modbus_cb, mb_timeout=11)
super().__init__(server_side, self.send_modbus_cb, mb_timeout=15)
self.await_conn_resp_cnt = 0
self.id_str = id_str
self.contact_name = b''
@@ -71,12 +74,23 @@ class Talent(Message):
self.modbus_elms = 0 # for unit tests
self.node_id = 'G3' # will be overwritten in __set_serial_no
self.mb_timer = Timer(self.mb_timout_cb, self.node_id)
self.mb_timeout = self.MB_REGULAR_TIMEOUT
self.mb_start_timeout = self.MB_START_TIMEOUT
self.modbus_polling = False
'''
Our public methods
'''
def close(self) -> None:
logging.debug('Talent.close()')
if self.server_side:
# set inverter state to offline, if output power is very low
logging.debug('close power: '
f'{self.db.get_db_value(Register.OUTPUT_POWER, -1)}')
if self.db.get_db_value(Register.OUTPUT_POWER, 999) < 2:
self.db.set_db_def_value(Register.INVERTER_STATUS, 0)
self.new_data['env'] = True
# we have references to methods of this class in self.switch
# so we have to erase self.switch, otherwise this instance can't be
# deallocated by the garbage collector ==> we get a memory leak
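The low-power workaround in `close()` above can be isolated into a few lines. This is only a sketch: the real code goes through `db.get_db_value`/`set_db_def_value` with `Register` keys, which a plain dict stands in for here:

```python
OFFLINE = 0  # inverter status value for "offline"

def reset_status_on_close(db: dict, server_side: bool) -> bool:
    """On server-side close, force the inverter status to offline
    when the last reported output power was below 2 W."""
    if server_side and db.get('OUTPUT_POWER', 999) < 2:
        db['INVERTER_STATUS'] = OFFLINE
        return True   # caller then marks new_data['env'] for MQTT publish
    return False
```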
@@ -98,6 +112,7 @@ class Talent(Message):
inv = inverters[serial_no]
self.node_id = inv['node_id']
self.sug_area = inv['suggested_area']
self.modbus_polling = inv['modbus_polling']
logger.debug(f'SerialNo {serial_no} allowed! area:{self.sug_area}') # noqa: E501
self.db.set_pv_module_details(inv)
else:
@@ -113,41 +128,46 @@ class Talent(Message):
self.unique_id = serial_no
def read(self) -> float:
'''process all received messages in the _recv_buffer'''
self._read()
while True:
if not self.header_valid:
self.__parse_header(self._recv_buffer, len(self._recv_buffer))
if not self.header_valid:
self.__parse_header(self._recv_buffer, len(self._recv_buffer))
if self.header_valid and \
len(self._recv_buffer) >= (self.header_len + self.data_len):
if self.state == State.init:
self.state = State.received # received 1st package
if self.header_valid and len(self._recv_buffer) >= (self.header_len +
self.data_len):
if self.state == State.init:
self.state = State.received # received 1st package
log_lvl = self.log_lvl.get(self.msg_id, logging.WARNING)
if callable(log_lvl):
log_lvl = log_lvl()
log_lvl = self.log_lvl.get(self.msg_id, logging.WARNING)
if callable(log_lvl):
log_lvl = log_lvl()
hex_dump_memory(log_lvl, f'Received from {self.addr}:'
f' BufLen: {len(self._recv_buffer)}'
f' HdrLen: {self.header_len}'
f' DtaLen: {self.data_len}',
self._recv_buffer, len(self._recv_buffer))
hex_dump_memory(log_lvl, f'Received from {self.addr}:',
self._recv_buffer, self.header_len+self.data_len)
self.__set_serial_no(self.id_str.decode("utf-8"))
self.__dispatch_msg()
self.__flush_recv_msg()
else:
return 0 # do not wait before sending a response
self.__set_serial_no(self.id_str.decode("utf-8"))
self.__dispatch_msg()
self.__flush_recv_msg()
return 0.5 # wait 500ms before sending a response
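The reworked `read()` above drains the receive buffer in a `while True` loop instead of handling a single frame per call, which is what the "handle multiple message" commit refers to. The framing idea can be sketched with a simplified 4-byte length prefix (the real Talent header also carries an id string and control bytes):

```python
import struct

def split_frames(buf: bytearray) -> list[bytes]:
    """Drain complete length-prefixed frames from buf (4-byte big-endian
    length of the remaining frame, as in the Talent '!l' prefix) and
    keep any trailing partial frame for the next read."""
    frames = []
    while len(buf) >= 4:
        (body_len,) = struct.unpack_from('!l', buf, 0)
        frame_len = 4 + body_len
        if len(buf) < frame_len:
            break                      # incomplete frame: wait for more data
        frames.append(bytes(buf[:frame_len]))
        del buf[:frame_len]            # flush the handled message
    return frames
```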
def forward(self, buffer, buflen) -> None:
def forward(self) -> None:
'''append the currently received msg to the forwarding queue'''
tsun = Config.get('tsun')
if tsun['enabled']:
self._forward_buffer = buffer[:buflen]
buffer = self._recv_buffer
buflen = self.header_len+self.data_len
self._forward_buffer += buffer[:buflen]
hex_dump_memory(logging.DEBUG, 'Store for forwarding:',
buffer, buflen)
self.__parse_header(self._forward_buffer,
len(self._forward_buffer))
fnc = self.switch.get(self.msg_id, self.msg_unknown)
logger.info(self.__flow_str(self.server_side, 'forwrd') +
f' Ctl: {int(self.ctrl):#02x} Msg: {fnc.__name__!r}')
return
def send_modbus_cb(self, modbus_pdu: bytearray, log_lvl: int, state: str):
if self.state != State.up:
@@ -177,13 +197,13 @@ class Talent(Message):
self._send_modbus_cmd(func, addr, val, log_lvl)
def mb_timout_cb(self, exp_cnt):
self.mb_timer.start(self.MB_REGULAR_TIMEOUT)
self.mb_timer.start(self.mb_timeout)
if 0 == (exp_cnt % 30):
if 2 == (exp_cnt % 30):
# logging.info("Regular Modbus Status request")
self._send_modbus_cmd(Modbus.READ_REGS, 0x2007, 2, logging.DEBUG)
self._send_modbus_cmd(Modbus.READ_REGS, 0x2000, 96, logging.DEBUG)
else:
self._send_modbus_cmd(Modbus.READ_REGS, 0x3008, 21, logging.DEBUG)
self._send_modbus_cmd(Modbus.READ_REGS, 0x3000, 48, logging.DEBUG)
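The changed `mb_timout_cb` above now reads the full config block (96 registers from 0x2000) on every 30th timer expiry and the larger runtime block (48 registers from 0x3000) on all others, matching the "read more modbus regs in polling mode" commit. As a sketch (register addresses taken from the diff, the Modbus call itself stubbed out):

```python
def next_poll_request(exp_cnt: int) -> tuple[int, int]:
    """Return (start_register, register_count) for the next poll:
    the slow-changing config block on every 30th timer expiry,
    the live measurement block on all others."""
    if exp_cnt % 30 == 2:
        return (0x2000, 96)   # config registers, rarely refreshed
    return (0x3000, 48)       # runtime/measurement registers
```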
def _init_new_client_conn(self) -> bool:
contact_name = self.contact_name
@@ -218,31 +238,43 @@ class Talent(Message):
return switch.get(type, '???')
def _timestamp(self): # pragma: no cover
if False:
# utc as epoche
ts = time.time()
else:
# convert localtime in epoche
ts = (datetime.now() - datetime(1970, 1, 1)).total_seconds()
'''returns the timestamp of the inverter as local time
since 1970-01-01, in msec'''
# convert local time to epoch
ts = (datetime.now() - datetime(1970, 1, 1)).total_seconds()
return round(ts*1000)
def _utcfromts(self, ts: float):
'''converts an inverter timestamp into Unix time (epoch)'''
dt = datetime.fromtimestamp(ts/1000, pytz.UTC). \
replace(tzinfo=get_localzone())
return dt.timestamp()
def _utc(self): # pragma: no cover
'''returns Unix time (epoch)'''
return datetime.now().timestamp()
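`_utcfromts` above reinterprets the inverter's local-time millisecond stamp as wall-clock time in the proxy's zone to recover a true Unix epoch. An equivalent sketch using the stdlib `zoneinfo` instead of `pytz`/`tzlocal`; passing a fixed zone instead of the local one is only for illustration:

```python
from datetime import datetime, timezone

def utc_from_local_ms(ts_ms: int, tz) -> float:
    """Treat ts_ms (ms since 1970, expressed in local wall-clock time)
    as a time in zone tz and return the true Unix epoch in seconds."""
    wall = datetime.fromtimestamp(ts_ms / 1000, timezone.utc)
    return wall.replace(tzinfo=tz).timestamp()
```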
def _update_header(self, _forward_buffer):
'''update header for message before forwarding,
add time offset to timestamp'''
_len = len(_forward_buffer)
result = struct.unpack_from('!lB', _forward_buffer, 0)
id_len = result[1] # len of variable id string
if _len < 2*id_len + 21:
return
ofs = 0
while ofs < _len:
result = struct.unpack_from('!lB', _forward_buffer, 0)
msg_len = 4 + result[0]
id_len = result[1] # len of variable id string
if _len < 2*id_len + 21:
return
result = struct.unpack_from('!B', _forward_buffer, id_len+6)
msg_code = result[0]
if msg_code == 0x71 or msg_code == 0x04:
result = struct.unpack_from('!q', _forward_buffer, 13+2*id_len)
ts = result[0] + self.ts_offset
logger.debug(f'offset: {self.ts_offset:08x}'
f' proxy-time: {ts:08x}')
struct.pack_into('!q', _forward_buffer, 13+2*id_len, ts)
result = struct.unpack_from('!B', _forward_buffer, id_len+6)
msg_code = result[0]
if msg_code == 0x71 or msg_code == 0x04:
result = struct.unpack_from('!q', _forward_buffer, 13+2*id_len)
ts = result[0] + self.ts_offset
logger.debug(f'offset: {self.ts_offset:08x}'
f' proxy-time: {ts:08x}')
struct.pack_into('!q', _forward_buffer, 13+2*id_len, ts)
ofs += msg_len
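The `_update_header` loop above now walks every message in the forward buffer and patches the 8-byte timestamp of message codes 0x71 and 0x04 in place. A standalone sketch of the patching step; the `!lB` header layout and the `13 + 2*id_len` timestamp offset follow the diff, but per-frame offsets are an assumption here:

```python
import struct

def shift_timestamps(buf: bytearray, ts_offset: int) -> int:
    """Walk length-prefixed frames ('!lB' = remaining length plus id
    length, as in the Talent header) and add ts_offset to the 8-byte
    timestamp at 13 + 2*id_len of frames whose msg code (at id_len + 6)
    is 0x71 or 0x04. Returns the number of patched frames."""
    patched = 0
    ofs = 0
    while ofs < len(buf):
        body_len, id_len = struct.unpack_from('!lB', buf, ofs)
        frame_len = 4 + body_len
        (msg_code,) = struct.unpack_from('!B', buf, ofs + id_len + 6)
        if msg_code in (0x71, 0x04):
            pos = ofs + 13 + 2 * id_len
            (ts,) = struct.unpack_from('!q', buf, pos)
            struct.pack_into('!q', buf, pos, ts + ts_offset)  # patch in place
            patched += 1
        ofs += frame_len
    return patched
```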
# check if there is a complete header in the buffer, parse it
# and set
@@ -259,7 +291,7 @@ class Talent(Message):
if (buf_len < 5): # enough bytes to read len and id_len?
return
result = struct.unpack_from('!lB', buf, 0)
len = result[0] # len of complete message
msg_len = result[0] # len of complete message
id_len = result[1] # len of variable id string
hdr_len = 5+id_len+2
@@ -273,10 +305,9 @@ class Talent(Message):
self.id_str = result[0]
self.ctrl = Control(result[1])
self.msg_id = result[2]
self.data_len = len-id_len-3
self.data_len = msg_len-id_len-3
self.header_len = hdr_len
self.header_valid = True
return
def __build_header(self, ctrl, msg_id=None) -> None:
if not msg_id:
@@ -321,12 +352,11 @@ class Talent(Message):
elif self.await_conn_resp_cnt > 0:
self.await_conn_resp_cnt -= 1
else:
self.forward(self._recv_buffer, self.header_len+self.data_len)
return
self.forward()
else:
logger.warning('Unknown Ctrl')
self.inc_counter('Unknown_Ctrl')
self.forward(self._recv_buffer, self.header_len+self.data_len)
self.forward()
def __process_contact_info(self) -> bool:
result = struct.unpack_from('!B', self._recv_buffer, self.header_len)
@@ -348,8 +378,9 @@ class Talent(Message):
def msg_get_time(self):
if self.ctrl.is_ind():
if self.data_len == 0:
self.state = State.pend # block MODBUS cmds
self.mb_timer.start(self.MB_START_TIMEOUT)
if self.state == State.up:
self.state = State.pend # block MODBUS cmds
ts = self._timestamp()
logger.debug(f'time: {ts:08x}')
self.__build_header(0x91)
@@ -369,7 +400,7 @@ class Talent(Message):
logger.warning('Unknown Ctrl')
self.inc_counter('Unknown_Ctrl')
self.forward(self._recv_buffer, self.header_len+self.data_len)
self.forward()
def parse_msg_header(self):
result = struct.unpack_from('!lB', self._recv_buffer, self.header_len)
@@ -383,11 +414,12 @@ class Talent(Message):
result = struct.unpack_from(f'!{id_len+1}pBq', self._recv_buffer,
self.header_len + 4)
timestamp = result[2]
logger.debug(f'ID: {result[0]} B: {result[1]}')
logger.debug(f'time: {result[2]:08x}')
logger.debug(f'time: {timestamp:08x}')
# logger.info(f'time: {datetime.utcfromtimestamp(result[2]).strftime(
# "%Y-%m-%d %H:%M:%S")}')
return msg_hdr_len
return msg_hdr_len, timestamp
def msg_collector_data(self):
if self.ctrl.is_ind():
@@ -402,7 +434,7 @@ class Talent(Message):
logger.warning('Unknown Ctrl')
self.inc_counter('Unknown_Ctrl')
self.forward(self._recv_buffer, self.header_len+self.data_len)
self.forward()
def msg_inverter_data(self):
if self.ctrl.is_ind():
@@ -411,6 +443,10 @@ class Talent(Message):
self.__finish_send_msg()
self.__process_data()
self.state = State.up # allow MODBUS cmds
if (self.modbus_polling):
self.mb_timer.start(self.mb_start_timeout)
self.db.set_db_def_value(Register.POLLING_INTERVAL,
self.mb_timeout)
elif self.ctrl.is_resp():
return # ignore received response
@@ -418,25 +454,26 @@ class Talent(Message):
logger.warning('Unknown Ctrl')
self.inc_counter('Unknown_Ctrl')
self.forward(self._recv_buffer, self.header_len+self.data_len)
self.forward()
def __process_data(self):
msg_hdr_len = self.parse_msg_header()
msg_hdr_len, ts = self.parse_msg_header()
for key, update in self.db.parse(self._recv_buffer, self.header_len
+ msg_hdr_len, self.node_id):
if update:
self._set_mqtt_timestamp(key, self._utcfromts(ts))
self.new_data[key] = True
def msg_ota_update(self):
if self.ctrl.is_req():
self.inc_counter('OTA_Start_Msg')
elif self.ctrl.is_ind():
pass
pass # Ok, nothing to do
else:
logger.warning('Unknown Ctrl')
self.inc_counter('Unknown_Ctrl')
self.forward(self._recv_buffer, self.header_len+self.data_len)
self.forward()
def parse_modbus_header(self):
@@ -445,27 +482,24 @@ class Talent(Message):
result = struct.unpack_from('!lBB', self._recv_buffer,
self.header_len)
modbus_len = result[1]
# logger.debug(f'Ref: {result[0]}')
# logger.debug(f'Modbus MsgLen: {modbus_len} Func:{result[2]}')
return msg_hdr_len, modbus_len
def get_modbus_log_lvl(self) -> int:
if self.ctrl.is_req():
return logging.INFO
elif self.ctrl.is_ind():
if self.server_side:
return self.mb.last_log_lvl
elif self.ctrl.is_ind() and self.server_side:
return self.mb.last_log_lvl
return logging.WARNING
def msg_modbus(self):
hdr_len, modbus_len = self.parse_modbus_header()
hdr_len, _ = self.parse_modbus_header()
data = self._recv_buffer[self.header_len:
self.header_len+self.data_len]
if self.ctrl.is_req():
if self.remoteStream.mb.recv_req(data[hdr_len:],
self.remoteStream.
msg_forward):
if self.remote_stream.mb.recv_req(data[hdr_len:],
self.remote_stream.
msg_forward):
self.inc_counter('Modbus_Command')
else:
self.inc_counter('Invalid_Msg_Format')
@@ -481,17 +515,18 @@ class Talent(Message):
hdr_len:],
self.node_id):
if update:
self._set_mqtt_timestamp(key, self._utc())
self.new_data[key] = True
self.modbus_elms += 1 # count for unit tests
else:
logger.warning('Unknown Ctrl')
self.inc_counter('Unknown_Ctrl')
self.forward(self._recv_buffer, self.header_len+self.data_len)
self.forward()
def msg_forward(self):
self.forward(self._recv_buffer, self.header_len+self.data_len)
self.forward()
def msg_unknown(self):
logger.warning(f"Unknow Msg: ID:{self.msg_id}")
self.inc_counter('Unknown_Msg')
self.forward(self._recv_buffer, self.header_len+self.data_len)
self.forward()

View File

@@ -1,5 +1,4 @@
import logging
# import gc
from asyncio import StreamReader, StreamWriter
from async_stream import AsyncStream
from gen3plus.solarman_v5 import SolarmanV5
@@ -11,11 +10,12 @@ class ConnectionG3P(AsyncStream, SolarmanV5):
def __init__(self, reader: StreamReader, writer: StreamWriter,
addr, remote_stream: 'ConnectionG3P',
server_side: bool) -> None:
server_side: bool,
client_mode: bool) -> None:
AsyncStream.__init__(self, reader, writer, addr)
SolarmanV5.__init__(self, server_side)
SolarmanV5.__init__(self, server_side, client_mode)
self.remoteStream: 'ConnectionG3P' = remote_stream
self.remote_stream: 'ConnectionG3P' = remote_stream
'''
Our public methods
@@ -26,10 +26,10 @@ class ConnectionG3P(AsyncStream, SolarmanV5):
# logger.info(f'AsyncStream refs: {gc.get_referrers(self)}')
async def async_create_remote(self) -> None:
pass
pass # virtual interface
async def async_publ_mqtt(self) -> None:
pass
pass # virtual interface
def healthy(self) -> bool:
logger.debug('ConnectionG3P healthy()')

View File

@@ -3,9 +3,9 @@ import struct
from typing import Generator
if __name__ == "app.src.gen3plus.infos_g3p":
from app.src.infos import Infos, Register
from app.src.infos import Infos, Register, ProxyMode
else: # pragma: no cover
from infos import Infos, Register
from infos import Infos, Register, ProxyMode
class RegisterMap:
@@ -14,15 +14,15 @@ class RegisterMap:
__slots__ = ()
map = {
# 0x41020007: {'reg': Register.DEVICE_SNR, 'fmt': '<L'}, # noqa: E501
0x41020018: {'reg': Register.DATA_UP_INTERVAL, 'fmt': '<B', 'ratio': 60}, # noqa: E501
0x41020019: {'reg': Register.COLLECT_INTERVAL, 'fmt': '<B', 'eval': 'round(result/60)'}, # noqa: E501
0x41020018: {'reg': Register.DATA_UP_INTERVAL, 'fmt': '<B', 'ratio': 60, 'dep': ProxyMode.SERVER}, # noqa: E501
0x41020019: {'reg': Register.COLLECT_INTERVAL, 'fmt': '<B', 'eval': 'round(result/60)', 'dep': ProxyMode.SERVER}, # noqa: E501
0x4102001a: {'reg': Register.HEARTBEAT_INTERVAL, 'fmt': '<B', 'ratio': 1}, # noqa: E501
0x4102001c: {'reg': Register.SIGNAL_STRENGTH, 'fmt': '<B', 'ratio': 1}, # noqa: E501
0x4102001c: {'reg': Register.SIGNAL_STRENGTH, 'fmt': '<B', 'ratio': 1, 'dep': ProxyMode.SERVER}, # noqa: E501
0x4102001e: {'reg': Register.CHIP_MODEL, 'fmt': '!40s'}, # noqa: E501
0x4102004c: {'reg': Register.IP_ADDRESS, 'fmt': '!16s'}, # noqa: E501
0x41020064: {'reg': Register.COLLECTOR_FW_VERSION, 'fmt': '!40s'}, # noqa: E501
0x4201001c: {'reg': Register.POWER_ON_TIME, 'fmt': '<H', 'ratio': 1}, # noqa: E501
0x4201001c: {'reg': Register.POWER_ON_TIME, 'fmt': '<H', 'ratio': 1, 'dep': ProxyMode.SERVER}, # noqa: E501
0x42010020: {'reg': Register.SERIAL_NUMBER, 'fmt': '!16s'}, # noqa: E501
0x420100c0: {'reg': Register.INVERTER_STATUS, 'fmt': '!H'}, # noqa: E501
0x420100d0: {'reg': Register.VERSION, 'fmt': '!H', 'eval': "f'V{(result>>12)}.{(result>>8)&0xf}.{(result>>4)&0xf}{result&0xf}'"}, # noqa: E501
@@ -56,19 +56,31 @@ class RegisterMap:
0x42010110: {'reg': Register.PV4_DAILY_GENERATION, 'fmt': '!H', 'ratio': 0.01}, # noqa: E501
0x42010112: {'reg': Register.PV4_TOTAL_GENERATION, 'fmt': '!L', 'ratio': 0.01}, # noqa: E501
0x42010126: {'reg': Register.MAX_DESIGNED_POWER, 'fmt': '!H', 'ratio': 1}, # noqa: E501
0x42010170: {'reg': Register.NO_INPUTS, 'fmt': '!B'}, # noqa: E501
0xffffff01: {'reg': Register.OUTPUT_COEFFICIENT},
0xffffff02: {'reg': Register.POLLING_INTERVAL},
# 0x4281001c: {'reg': Register.POWER_ON_TIME, 'fmt': '<H', 'ratio': 1}, # noqa: E501
}
class InfosG3P(Infos):
def __init__(self):
def __init__(self, client_mode: bool):
super().__init__()
self.client_mode = client_mode
self.set_db_def_value(Register.MANUFACTURER, 'TSUN')
self.set_db_def_value(Register.EQUIPMENT_MODEL, 'TSOL-MSxx00')
self.set_db_def_value(Register.CHIP_TYPE, 'IGEN TECH')
self.set_db_def_value(Register.NO_INPUTS, 4)
def __hide_topic(self, row: dict) -> bool:
if 'dep' in row:
mode = row['dep']
if self.client_mode:
return mode != ProxyMode.CLIENT
else:
return mode != ProxyMode.SERVER
return False
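`__hide_topic` above suppresses Home Assistant topics whose `dep` entry does not match the current proxy mode, which implements the "hide some topics in client mode" commit. A standalone sketch (the `ProxyMode` members mirror the diff):

```python
from enum import Enum, auto

class ProxyMode(Enum):
    SERVER = auto()
    CLIENT = auto()

def hide_topic(row: dict, client_mode: bool) -> bool:
    """A register row with a 'dep' mode is published only in that mode;
    rows without 'dep' are always published."""
    if 'dep' not in row:
        return False
    wanted = ProxyMode.CLIENT if client_mode else ProxyMode.SERVER
    return row['dep'] != wanted
```

For hidden rows the proxy then sends an empty MQTT config topic (`ha_remove`) so HA drops the entity during auto-discovery.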
def ha_confs(self, ha_prfx: str, node_id: str, snr: str,
sug_area: str = '') \
@@ -84,7 +96,10 @@ class InfosG3P(Infos):
# iterate over RegisterMap.map and get the register values
for row in RegisterMap.map.values():
info_id = row['reg']
res = self.ha_conf(info_id, ha_prfx, node_id, snr, False, sug_area) # noqa: E501
if self.__hide_topic(row):
res = self.ha_remove(info_id, node_id, snr) # noqa: E501
else:
res = self.ha_conf(info_id, ha_prfx, node_id, snr, False, sug_area) # noqa: E501
if res:
yield res

View File

@@ -9,9 +9,7 @@ from gen3plus.connection_g3p import ConnectionG3P
from aiomqtt import MqttCodeError
from infos import Infos
# import gc
# logger = logging.getLogger('conn')
logger_mqtt = logging.getLogger('mqtt')
@@ -45,8 +43,10 @@ class InverterG3P(Inverter, ConnectionG3P):
destroyed
'''
def __init__(self, reader: StreamReader, writer: StreamWriter, addr):
super().__init__(reader, writer, addr, None, True)
def __init__(self, reader: StreamReader, writer: StreamWriter, addr,
client_mode: bool = False):
super().__init__(reader, writer, addr, None,
server_side=True, client_mode=client_mode)
self.__ha_restarts = -1
async def async_create_remote(self) -> None:
@@ -60,10 +60,11 @@ class InverterG3P(Inverter, ConnectionG3P):
logging.info(f'[{self.node_id}] Connect to {addr}')
connect = asyncio.open_connection(host, port)
reader, writer = await connect
self.remoteStream = ConnectionG3P(reader, writer, addr, self,
False)
logging.info(f'[{self.remoteStream.node_id}:'
f'{self.remoteStream.conn_no}] '
self.remote_stream = ConnectionG3P(reader, writer, addr, self,
server_side=False,
client_mode=False)
logging.info(f'[{self.remote_stream.node_id}:'
f'{self.remote_stream.conn_no}] '
f'Connected to {addr}')
asyncio.create_task(self.client_loop(addr))

View File

@@ -1,5 +1,4 @@
import struct
# import json
import logging
import time
import asyncio
@@ -19,7 +18,6 @@ else: # pragma: no cover
from my_timer import Timer
from gen3plus.infos_g3p import InfosG3P
from infos import Register
# import traceback
logger = logging.getLogger('msg')
@@ -54,18 +52,24 @@ class SolarmanV5(Message):
AT_CMD = 1
MB_RTU_CMD = 2
MB_START_TIMEOUT = 40
'''start delay for Modbus polling in server mode'''
MB_REGULAR_TIMEOUT = 60
'''regular Modbus polling time in server mode'''
MB_CLIENT_DATA_UP = 30
'''Data up time in client mode'''
def __init__(self, server_side: bool):
def __init__(self, server_side: bool, client_mode: bool):
super().__init__(server_side, self.send_modbus_cb, mb_timeout=5)
self.header_len = 11 # overwrite constructor in class Message
self.control = 0
self.seq = Sequence(server_side)
self.snr = 0
self.db = InfosG3P()
self.db = InfosG3P(client_mode)
self.time_ofs = 0
self.forward_at_cmd_resp = False
self.no_forwarding = False
'''connections of this type must not be forwarded to the TSUN cloud'''
self.switch = {
0x4210: self.msg_data_ind, # real time data
@@ -128,12 +132,24 @@ class SolarmanV5(Message):
self.node_id = 'G3P' # will be overwritten in __set_serial_no
self.mb_timer = Timer(self.mb_timout_cb, self.node_id)
self.mb_timeout = self.MB_REGULAR_TIMEOUT
self.mb_start_timeout = self.MB_START_TIMEOUT
'''timer value for next Modbus polling request'''
self.modbus_polling = False
'''
Our public methods
'''
def close(self) -> None:
logging.debug('Solarman.close()')
if self.server_side:
# set inverter state to offline, if output power is very low
logging.debug('close power: '
f'{self.db.get_db_value(Register.OUTPUT_POWER, -1)}')
if self.db.get_db_value(Register.OUTPUT_POWER, 999) < 2:
self.db.set_db_def_value(Register.INVERTER_STATUS, 0)
self.new_data['env'] = True
# we have references to methods of this class in self.switch
# so we have to erase self.switch, otherwise this instance can't be
# deallocated by the garbage collector ==> we get a memory leak
@@ -143,6 +159,31 @@ class SolarmanV5(Message):
self.mb_timer.close()
super().close()
async def send_start_cmd(self, snr: int, host: str,
start_timeout=MB_CLIENT_DATA_UP):
self.no_forwarding = True
self.snr = snr
self.__set_serial_no(snr)
self.mb_timeout = start_timeout
self.db.set_db_def_value(Register.IP_ADDRESS, host)
self.db.set_db_def_value(Register.POLLING_INTERVAL,
self.mb_timeout)
self.db.set_db_def_value(Register.HEARTBEAT_INTERVAL,
120) # fixme
self.new_data['controller'] = True
self.state = State.up
self._send_modbus_cmd(Modbus.READ_REGS, 0x3000, 48, logging.DEBUG)
self.mb_timer.start(self.mb_timeout)
def new_state_up(self):
if self.state is not State.up:
self.state = State.up
if (self.modbus_polling):
self.mb_timer.start(self.mb_start_timeout)
self.db.set_db_def_value(Register.POLLING_INTERVAL,
self.mb_timeout)
def __set_serial_no(self, snr: int):
serial_no = str(snr)
if self.unique_id == serial_no:
@@ -159,6 +200,7 @@ class SolarmanV5(Message):
found = True
self.node_id = inv['node_id']
self.sug_area = inv['suggested_area']
self.modbus_polling = inv['modbus_polling']
logger.debug(f'SerialNo {serial_no} allowed! area:{self.sug_area}') # noqa: E501
self.db.set_pv_module_details(inv)
@@ -175,50 +217,47 @@ class SolarmanV5(Message):
self.unique_id = serial_no
def read(self) -> float:
'''process all received messages in the _recv_buffer'''
self._read()
while True:
if not self.header_valid:
self.__parse_header(self._recv_buffer, len(self._recv_buffer))
if not self.header_valid:
self.__parse_header(self._recv_buffer, len(self._recv_buffer))
if self.header_valid and len(self._recv_buffer) >= \
(self.header_len + self.data_len+2):
log_lvl = self.log_lvl.get(self.control, logging.WARNING)
if callable(log_lvl):
log_lvl = log_lvl()
hex_dump_memory(log_lvl, f'Received from {self.addr}:',
self._recv_buffer, self.header_len +
self.data_len+2)
if self.__trailer_is_ok(self._recv_buffer, self.header_len
+ self.data_len + 2):
if self.state == State.init:
self.state = State.received
if self.header_valid and len(self._recv_buffer) >= (self.header_len +
self.data_len+2):
log_lvl = self.log_lvl.get(self.control, logging.WARNING)
if callable(log_lvl):
log_lvl = log_lvl()
hex_dump_memory(log_lvl, f'Received from {self.addr}:',
self._recv_buffer, self.header_len+self.data_len+2)
if self.__trailer_is_ok(self._recv_buffer, self.header_len
+ self.data_len + 2):
if self.state == State.init:
self.state = State.received
self.__set_serial_no(self.snr)
self.__dispatch_msg()
self.__flush_recv_msg()
return 0 # wait 0s before sending a response
self.__set_serial_no(self.snr)
self.__dispatch_msg()
self.__flush_recv_msg()
else:
return 0 # wait 0s before sending a response
def forward(self, buffer, buflen) -> None:
'''add the actual receive msg to the forwarding queue'''
if self.no_forwarding:
return
tsun = Config.get('solarman')
if tsun['enabled']:
self._forward_buffer = buffer[:buflen]
self._forward_buffer += buffer[:buflen]
hex_dump_memory(logging.DEBUG, 'Store for forwarding:',
buffer, buflen)
self.__parse_header(self._forward_buffer,
len(self._forward_buffer))
fnc = self.switch.get(self.control, self.msg_unknown)
logger.info(self.__flow_str(self.server_side, 'forwrd') +
f' Ctl: {int(self.control):#04x}'
f' Msg: {fnc.__name__!r}')
return
def _init_new_client_conn(self) -> bool:
# self.__build_header(0x91)
# self._send_buffer += struct.pack(f'!{len(contact_name)+1}p'
# f'{len(contact_mail)+1}p',
# contact_name, contact_mail)
# self.__finish_send_msg()
return False
'''
@@ -270,7 +309,6 @@ class SolarmanV5(Message):
self._recv_buffer = bytearray()
return
self.header_valid = True
return
def __trailer_is_ok(self, buf: bytes, buf_len: int) -> bool:
crc = buf[self.data_len+11]
@@ -320,13 +358,17 @@ class SolarmanV5(Message):
'''update header for message before forwarding,
set sequence and checksum'''
_len = len(_forward_buffer)
struct.pack_into('<H', _forward_buffer, 1,
_len-13)
struct.pack_into('<H', _forward_buffer, 5,
self.seq.get_send())
ofs = 0
while ofs < _len:
result = struct.unpack_from('<BH', _forward_buffer, ofs)
data_len = result[1] # len of variable id string
check = sum(_forward_buffer[1:_len-2]) & 0xff
struct.pack_into('<B', _forward_buffer, _len-2, check)
struct.pack_into('<H', _forward_buffer, ofs+5,
self.seq.get_send())
check = sum(_forward_buffer[ofs+1:ofs+data_len+11]) & 0xff
struct.pack_into('<B', _forward_buffer, ofs+data_len+11, check)
ofs += (13 + data_len)
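The reworked `_update_header` now walks every queued frame in the forward buffer and recomputes the sequence number and checksum per frame instead of once for the whole buffer. The trailer rule it relies on can be sketched standalone (a simplified check, assuming the usual Solarman V5 layout of an 0xA5 start byte, 11-byte header, payload, checksum byte, 0x15 end byte):

```python
def v5_checksum(frame: bytes) -> int:
    """8-bit sum over everything between the 0xA5 start byte
    and the checksum byte itself (Solarman V5 trailer rule)."""
    return sum(frame[1:-2]) & 0xff

# a toy frame: start byte, three payload-ish bytes, checksum, 0x15 end byte
frame = bytes([0xA5, 0x01, 0x02, 0x03, 0x06, 0x15])
assert v5_checksum(frame) == frame[-2]
```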
def __dispatch_msg(self) -> None:
fnc = self.switch.get(self.control, self.msg_unknown)
@@ -378,27 +420,27 @@ class SolarmanV5(Message):
self._send_modbus_cmd(func, addr, val, log_lvl)
def mb_timout_cb(self, exp_cnt):
self.mb_timer.start(self.MB_REGULAR_TIMEOUT)
self.mb_timer.start(self.mb_timeout)
self._send_modbus_cmd(Modbus.READ_REGS, 0x3008, 21, logging.DEBUG)
self._send_modbus_cmd(Modbus.READ_REGS, 0x3000, 48, logging.DEBUG)
if 0 == (exp_cnt % 30):
if 1 == (exp_cnt % 30):
# logging.info("Regular Modbus Status request")
self._send_modbus_cmd(Modbus.READ_REGS, 0x2007, 2, logging.DEBUG)
self._send_modbus_cmd(Modbus.READ_REGS, 0x2000, 96, logging.DEBUG)
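With `mb_timeout` now configurable, the callback reads the real-time block at 0x3000 on every expiry and the larger 0x2000 block only on every 30th one (note the change from `0 == (exp_cnt % 30)` to `1 == (exp_cnt % 30)`, so the big read also fires on the first regular expiry). A minimal sketch of that cadence, with a hypothetical helper name:

```python
def polls_for_tick(exp_cnt: int) -> list[tuple[int, int]]:
    """Return (start_reg, length) read requests for one timer expiry."""
    reqs = [(0x3000, 48)]        # real-time data, every expiry
    if exp_cnt % 30 == 1:        # config/status block, every 30th expiry
        reqs.append((0x2000, 96))
    return reqs

assert polls_for_tick(1) == [(0x3000, 48), (0x2000, 96)]
assert polls_for_tick(2) == [(0x3000, 48)]
```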
def at_cmd_forbidden(self, cmd: str, connection: str) -> bool:
return not cmd.startswith(tuple(self.at_acl[connection]['allow'])) or \
cmd.startswith(tuple(self.at_acl[connection]['block']))
async def send_at_cmd(self, AT_cmd: str) -> None:
async def send_at_cmd(self, at_cmd: str) -> None:
if self.state != State.up:
logger.warning(f'[{self.node_id}] ignore AT+ cmd,'
' as the state is not UP')
return
AT_cmd = AT_cmd.strip()
at_cmd = at_cmd.strip()
if self.at_cmd_forbidden(cmd=AT_cmd, connection='mqtt'):
data_json = f'\'{AT_cmd}\' is forbidden'
if self.at_cmd_forbidden(cmd=at_cmd, connection='mqtt'):
data_json = f'\'{at_cmd}\' is forbidden'
node_id = self.node_id
key = 'at_resp'
logger.info(f'{key}: {data_json}')
@@ -407,8 +449,8 @@ class SolarmanV5(Message):
self.forward_at_cmd_resp = False
self.__build_header(0x4510)
self._send_buffer += struct.pack(f'<BHLLL{len(AT_cmd)}sc', self.AT_CMD,
2, 0, 0, 0, AT_cmd.encode('utf-8'),
self._send_buffer += struct.pack(f'<BHLLL{len(at_cmd)}sc', self.AT_CMD,
2, 0, 0, 0, at_cmd.encode('utf-8'),
b'\r')
self.__finish_send_msg()
try:
@@ -421,21 +463,21 @@ class SolarmanV5(Message):
def __build_model_name(self):
db = self.db
MaxPow = db.get_db_value(Register.MAX_DESIGNED_POWER, 0)
Rated = db.get_db_value(Register.RATED_POWER, 0)
Model = None
if MaxPow == 2000:
if Rated == 800 or Rated == 600:
Model = f'TSOL-MS{MaxPow}({Rated})'
max_pow = db.get_db_value(Register.MAX_DESIGNED_POWER, 0)
rated = db.get_db_value(Register.RATED_POWER, 0)
model = None
if max_pow == 2000:
if rated == 800 or rated == 600:
model = f'TSOL-MS{max_pow}({rated})'
else:
Model = f'TSOL-MS{MaxPow}'
elif MaxPow == 1800 or MaxPow == 1600:
Model = f'TSOL-MS{MaxPow}'
if Model:
logger.info(f'Model: {Model}')
self.db.set_db_def_value(Register.EQUIPMENT_MODEL, Model)
model = f'TSOL-MS{max_pow}'
elif max_pow == 1800 or max_pow == 1600:
model = f'TSOL-MS{max_pow}'
if model:
logger.info(f'Model: {model}')
self.db.set_db_def_value(Register.EQUIPMENT_MODEL, model)
def __process_data(self, ftype):
def __process_data(self, ftype, ts):
inv_update = False
msg_type = self.control >> 8
for key, update in self.db.parse(self._recv_buffer, msg_type, ftype,
@@ -443,6 +485,7 @@ class SolarmanV5(Message):
if update:
if key == 'inverter':
inv_update = True
self._set_mqtt_timestamp(key, ts)
self.new_data[key] = True
if inv_update:
@@ -459,16 +502,18 @@ class SolarmanV5(Message):
data = self._recv_buffer[self.header_len:]
result = struct.unpack_from('<BLLL', data, 0)
ftype = result[0] # always 2
# total = result[1]
total = result[1]
tim = result[2]
res = result[3] # always zero
logger.info(f'frame type:{ftype:02x}'
f' timer:{tim:08x}s null:{res}')
# if self.time_ofs:
# dt = datetime.fromtimestamp(total + self.time_ofs)
# logger.info(f'ts: {dt.strftime("%Y-%m-%d %H:%M:%S")}')
self.__process_data(ftype)
if self.time_ofs:
# dt = datetime.fromtimestamp(total + self.time_ofs)
# logger.info(f'ts: {dt.strftime("%Y-%m-%d %H:%M:%S")}')
ts = total + self.time_ofs
else:
ts = None
self.__process_data(ftype, ts)
self.__forward_msg()
self.__send_ack_rsp(0x1110, ftype)
@@ -476,7 +521,7 @@ class SolarmanV5(Message):
data = self._recv_buffer
result = struct.unpack_from('<BHLLLHL', data, self.header_len)
ftype = result[0] # 1 or 0x81
# total = result[2]
total = result[2]
tim = result[3]
if 1 == ftype:
self.time_ofs = result[4]
@@ -484,16 +529,17 @@ class SolarmanV5(Message):
cnt = result[6]
logger.info(f'ftype:{ftype:02x} timer:{tim:08x}s'
f' ??: {unkn:04x} cnt:{cnt}')
# if self.time_ofs:
# dt = datetime.fromtimestamp(total + self.time_ofs)
# logger.info(f'ts: {dt.strftime("%Y-%m-%d %H:%M:%S")}')
if self.time_ofs:
# dt = datetime.fromtimestamp(total + self.time_ofs)
# logger.info(f'ts: {dt.strftime("%Y-%m-%d %H:%M:%S")}')
ts = total + self.time_ofs
else:
ts = None
self.__process_data(ftype)
self.__process_data(ftype, ts)
self.__forward_msg()
self.__send_ack_rsp(0x1210, ftype)
if self.state is not State.up:
self.state = State.up
self.mb_timer.start(self.MB_START_TIMEOUT)
self.new_state_up()
def msg_sync_start(self):
data = self._recv_buffer[self.header_len:]
@@ -514,17 +560,17 @@ class SolarmanV5(Message):
result = struct.unpack_from('<B', data, 0)
ftype = result[0]
if ftype == self.AT_CMD:
AT_cmd = data[15:].decode()
if self.at_cmd_forbidden(cmd=AT_cmd, connection='tsun'):
at_cmd = data[15:].decode()
if self.at_cmd_forbidden(cmd=at_cmd, connection='tsun'):
self.inc_counter('AT_Command_Blocked')
return
self.inc_counter('AT_Command')
self.forward_at_cmd_resp = True
elif ftype == self.MB_RTU_CMD:
if self.remoteStream.mb.recv_req(data[15:],
self.remoteStream.
__forward_msg):
if self.remote_stream.mb.recv_req(data[15:],
self.remote_stream.
__forward_msg):
self.inc_counter('Modbus_Command')
else:
logger.error('Invalid Modbus Msg')
@@ -533,7 +579,7 @@ class SolarmanV5(Message):
self.__forward_msg()
def publish_mqtt(self, key, data):
def publish_mqtt(self, key, data): # pragma: no cover
asyncio.ensure_future(
self.mqtt.publish(key, data))
@@ -543,9 +589,9 @@ class SolarmanV5(Message):
if self.forward_at_cmd_resp:
return logging.INFO
return logging.DEBUG
elif ftype == self.MB_RTU_CMD:
if self.server_side:
return self.mb.last_log_lvl
elif ftype == self.MB_RTU_CMD \
and self.server_side:
return self.mb.last_log_lvl
return logging.WARNING
@@ -576,6 +622,7 @@ class SolarmanV5(Message):
if update:
if key == 'inverter':
inv_update = True
self._set_mqtt_timestamp(key, self._timestamp())
self.new_data[key] = True
if inv_update:
@@ -590,9 +637,7 @@ class SolarmanV5(Message):
self.__forward_msg()
self.__send_ack_rsp(0x1710, ftype)
if self.state is not State.up:
self.state = State.up
self.mb_timer.start(self.MB_START_TIMEOUT)
self.new_state_up()
def msg_sync_end(self):
data = self._recv_buffer[self.header_len:]


@@ -5,6 +5,11 @@ from enum import Enum
from typing import Generator
class ProxyMode(Enum):
SERVER = 1
CLIENT = 2
class Register(Enum):
COLLECTOR_FW_VERSION = 1
CHIP_TYPE = 2
@@ -18,6 +23,7 @@ class Register(Enum):
EQUIPMENT_MODEL = 24
NO_INPUTS = 25
MAX_DESIGNED_POWER = 26
OUTPUT_COEFFICIENT = 27
INVERTER_CNT = 50
UNKNOWN_SNR = 51
UNKNOWN_MSG = 52
@@ -89,6 +95,7 @@ class Register(Enum):
CONNECT_COUNT = 405
HEARTBEAT_INTERVAL = 406
IP_ADDRESS = 407
POLLING_INTERVAL = 408
EVENT_401 = 500
EVENT_402 = 501
EVENT_403 = 502
@@ -105,6 +112,9 @@ class Register(Enum):
EVENT_414 = 513
EVENT_415 = 514
EVENT_416 = 515
TS_INPUT = 600
TS_GRID = 601
TS_TOTAL = 602
VALUE_1 = 9000
TEST_REG1 = 10000
TEST_REG2 = 10001
@@ -120,16 +130,16 @@ class ClrAtMidnight:
return
prfx += f'{keys[0]}'
dict = cls.db
if prfx not in dict:
dict[prfx] = {}
dict = dict[prfx]
db_dict = cls.db
if prfx not in db_dict:
db_dict[prfx] = {}
db_dict = db_dict[prfx]
for key in keys[1:-1]:
if key not in dict:
dict[key] = {}
dict = dict[key]
dict[keys[-1]] = 0
if key not in db_dict:
db_dict[key] = {}
db_dict = db_dict[key]
db_dict[keys[-1]] = 0
@classmethod
def elm(cls) -> Generator[tuple[str, dict], None, None]:
@@ -177,15 +187,29 @@ class Infos:
}
__comm_type_val_tpl = "{%set com_types = ['n/a','Wi-Fi', 'G4', 'G5', 'GPRS'] %}{{com_types[value_json['Communication_Type']|int(0)]|default(value_json['Communication_Type'])}}" # noqa: E501
__status_type_val_tpl = "{%set inv_status = ['n/a', 'Online', 'Offline'] %}{{inv_status[value_json['Inverter_Status']|int(0)]|default(value_json['Inverter_Status'])}}" # noqa: E501
__status_type_val_tpl = "{%set inv_status = ['Off-line', 'On-grid', 'Off-grid'] %}{{inv_status[value_json['Inverter_Status']|int(0)]|default(value_json['Inverter_Status'])}}" # noqa: E501
__rated_power_val_tpl = "{% if 'Rated_Power' in value_json and value_json['Rated_Power'] != None %}{{value_json['Rated_Power']|string() +' W'}}{% else %}{{ this.state }}{% endif %}" # noqa: E501
__designed_power_val_tpl = '''
{% if 'Max_Designed_Power' in value_json and
value_json['Max_Designed_Power'] != None %}
{% if value_json['Max_Designed_Power'] | int(0xffff) < 0x8000 %}
{{value_json['Max_Designed_Power']|string() +' W'}}
{% else %}
n/a
{% endif %}
{% else %}
{{ this.state }}
{% endif %}
'''
__output_coef_val_tpl = "{% if 'Output_Coefficient' in value_json and value_json['Output_Coefficient'] != None %}{{value_json['Output_Coefficient']|string() +' %'}}{% else %}{{ this.state }}{% endif %}" # noqa: E501
__info_defs = {
# collector values used for device registration:
Register.COLLECTOR_FW_VERSION: {'name': ['collector', 'Collector_Fw_Version'], 'level': logging.INFO, 'unit': ''}, # noqa: E501
Register.CHIP_TYPE: {'name': ['collector', 'Chip_Type'], 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
Register.CHIP_MODEL: {'name': ['collector', 'Chip_Model'], 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
Register.TRACE_URL: {'name': ['collector', 'Trace_URL'], 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
Register.LOGGER_URL: {'name': ['collector', 'Logger_URL'], 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
Register.CHIP_TYPE: {'name': ['collector', 'Chip_Type'], 'singleton': False, 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
Register.CHIP_MODEL: {'name': ['collector', 'Chip_Model'], 'singleton': False, 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
Register.TRACE_URL: {'name': ['collector', 'Trace_URL'], 'singleton': False, 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
Register.LOGGER_URL: {'name': ['collector', 'Logger_URL'], 'singleton': False, 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
# inverter values used for device registration:
Register.PRODUCT_NAME: {'name': ['inverter', 'Product_Name'], 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
@@ -194,9 +218,9 @@ class Infos:
Register.SERIAL_NUMBER: {'name': ['inverter', 'Serial_Number'], 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
Register.EQUIPMENT_MODEL: {'name': ['inverter', 'Equipment_Model'], 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
Register.NO_INPUTS: {'name': ['inverter', 'No_Inputs'], 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
Register.MAX_DESIGNED_POWER: {'name': ['inverter', 'Max_Designed_Power'], 'level': logging.INFO, 'unit': 'W', 'ha': {'dev': 'inverter', 'dev_cla': None, 'stat_cla': None, 'id': 'designed_power_', 'fmt': '| string + " W"', 'name': 'Max Designed Power', 'icon': 'mdi:lightning-bolt', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.RATED_POWER: {'name': ['inverter', 'Rated_Power'], 'level': logging.DEBUG, 'unit': 'W', 'ha': {'dev': 'inverter', 'dev_cla': None, 'stat_cla': None, 'id': 'rated_power_', 'fmt': '| string + " W"', 'name': 'Rated Power', 'icon': 'mdi:lightning-bolt', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.MAX_DESIGNED_POWER: {'name': ['inverter', 'Max_Designed_Power'], 'level': logging.INFO, 'unit': 'W', 'ha': {'dev': 'inverter', 'dev_cla': None, 'stat_cla': None, 'id': 'designed_power_', 'val_tpl': __designed_power_val_tpl, 'name': 'Max Designed Power', 'icon': 'mdi:lightning-bolt', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.RATED_POWER: {'name': ['inverter', 'Rated_Power'], 'level': logging.DEBUG, 'unit': 'W', 'ha': {'dev': 'inverter', 'dev_cla': None, 'stat_cla': None, 'id': 'rated_power_', 'val_tpl': __rated_power_val_tpl, 'name': 'Rated Power', 'icon': 'mdi:lightning-bolt', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.OUTPUT_COEFFICIENT: {'name': ['inverter', 'Output_Coefficient'], 'level': logging.INFO, 'unit': '%', 'ha': {'dev': 'inverter', 'dev_cla': None, 'stat_cla': None, 'id': 'output_coef_', 'val_tpl': __output_coef_val_tpl, 'name': 'Output Coefficient', 'icon': 'mdi:lightning-bolt', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.PV1_MANUFACTURER: {'name': ['inverter', 'PV1_Manufacturer'], 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
Register.PV1_MODEL: {'name': ['inverter', 'PV1_Model'], 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
Register.PV2_MANUFACTURER: {'name': ['inverter', 'PV2_Manufacturer'], 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
@@ -244,14 +268,16 @@ class Infos:
Register.EVENT_416: {'name': ['events', '416_'], 'level': logging.DEBUG, 'unit': ''}, # noqa: E501
# grid measures:
Register.TS_GRID: {'name': ['grid', 'Timestamp'], 'level': logging.INFO, 'unit': ''}, # noqa: E501
Register.GRID_VOLTAGE: {'name': ['grid', 'Voltage'], 'level': logging.DEBUG, 'unit': 'V', 'ha': {'dev': 'inverter', 'dev_cla': 'voltage', 'stat_cla': 'measurement', 'id': 'out_volt_', 'fmt': '| float', 'name': 'Grid Voltage', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.GRID_CURRENT: {'name': ['grid', 'Current'], 'level': logging.DEBUG, 'unit': 'A', 'ha': {'dev': 'inverter', 'dev_cla': 'current', 'stat_cla': 'measurement', 'id': 'out_cur_', 'fmt': '| float', 'name': 'Grid Current', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.GRID_FREQUENCY: {'name': ['grid', 'Frequency'], 'level': logging.DEBUG, 'unit': 'Hz', 'ha': {'dev': 'inverter', 'dev_cla': 'frequency', 'stat_cla': 'measurement', 'id': 'out_freq_', 'fmt': '| float', 'name': 'Grid Frequency', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.OUTPUT_POWER: {'name': ['grid', 'Output_Power'], 'level': logging.INFO, 'unit': 'W', 'ha': {'dev': 'inverter', 'dev_cla': 'power', 'stat_cla': 'measurement', 'id': 'out_power_', 'fmt': '| float', 'name': 'Power'}}, # noqa: E501
Register.INVERTER_TEMP: {'name': ['env', 'Inverter_Temp'], 'level': logging.DEBUG, 'unit': '°C', 'ha': {'dev': 'inverter', 'dev_cla': 'temperature', 'stat_cla': 'measurement', 'id': 'temp_', 'fmt': '| int', 'name': 'Temperature'}}, # noqa: E501
Register.INVERTER_STATUS: {'name': ['env', 'Inverter_Status'], 'level': logging.INFO, 'unit': '', 'ha': {'dev': 'inverter', 'comp': 'sensor', 'dev_cla': None, 'stat_cla': None, 'id': 'inv_status_', 'name': 'Inverter Status', 'val_tpl': __status_type_val_tpl, 'icon': 'mdi:counter'}}, # noqa: E501
Register.INVERTER_STATUS: {'name': ['env', 'Inverter_Status'], 'level': logging.INFO, 'unit': '', 'ha': {'dev': 'inverter', 'comp': 'sensor', 'dev_cla': None, 'stat_cla': None, 'id': 'inv_status_', 'name': 'Inverter Status', 'val_tpl': __status_type_val_tpl, 'icon': 'mdi:power'}}, # noqa: E501
# input measures:
Register.TS_INPUT: {'name': ['input', 'Timestamp'], 'level': logging.INFO, 'unit': ''}, # noqa: E501
Register.PV1_VOLTAGE: {'name': ['input', 'pv1', 'Voltage'], 'level': logging.DEBUG, 'unit': 'V', 'ha': {'dev': 'input_pv1', 'dev_cla': 'voltage', 'stat_cla': 'measurement', 'id': 'volt_pv1_', 'val_tpl': "{{ (value_json['pv1']['Voltage'] | float)}}", 'icon': 'mdi:gauge', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.PV1_CURRENT: {'name': ['input', 'pv1', 'Current'], 'level': logging.DEBUG, 'unit': 'A', 'ha': {'dev': 'input_pv1', 'dev_cla': 'current', 'stat_cla': 'measurement', 'id': 'cur_pv1_', 'val_tpl': "{{ (value_json['pv1']['Current'] | float)}}", 'icon': 'mdi:gauge', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.PV1_POWER: {'name': ['input', 'pv1', 'Power'], 'level': logging.DEBUG, 'unit': 'W', 'ha': {'dev': 'input_pv1', 'dev_cla': 'power', 'stat_cla': 'measurement', 'id': 'power_pv1_', 'val_tpl': "{{ (value_json['pv1']['Power'] | float)}}"}}, # noqa: E501
@@ -283,6 +309,7 @@ class Infos:
Register.PV6_DAILY_GENERATION: {'name': ['input', 'pv6', 'Daily_Generation'], 'level': logging.DEBUG, 'unit': 'kWh', 'ha': {'dev': 'input_pv6', 'dev_cla': 'energy', 'stat_cla': 'total_increasing', 'id': 'daily_gen_pv6_', 'name': 'Daily Generation', 'val_tpl': "{{ (value_json['pv6']['Daily_Generation'] | float)}}", 'icon': 'mdi:solar-power-variant', 'must_incr': True}}, # noqa: E501
Register.PV6_TOTAL_GENERATION: {'name': ['input', 'pv6', 'Total_Generation'], 'level': logging.DEBUG, 'unit': 'kWh', 'ha': {'dev': 'input_pv6', 'dev_cla': 'energy', 'stat_cla': 'total', 'id': 'total_gen_pv6_', 'name': 'Total Generation', 'val_tpl': "{{ (value_json['pv6']['Total_Generation'] | float)}}", 'icon': 'mdi:solar-power', 'must_incr': True}}, # noqa: E501
# total:
Register.TS_TOTAL: {'name': ['total', 'Timestamp'], 'level': logging.INFO, 'unit': ''}, # noqa: E501
Register.DAILY_GENERATION: {'name': ['total', 'Daily_Generation'], 'level': logging.INFO, 'unit': 'kWh', 'ha': {'dev': 'inverter', 'dev_cla': 'energy', 'stat_cla': 'total_increasing', 'id': 'daily_gen_', 'fmt': '| float', 'name': 'Daily Generation', 'icon': 'mdi:solar-power-variant', 'must_incr': True}}, # noqa: E501
Register.TOTAL_GENERATION: {'name': ['total', 'Total_Generation'], 'level': logging.INFO, 'unit': 'kWh', 'ha': {'dev': 'inverter', 'dev_cla': 'energy', 'stat_cla': 'total', 'id': 'total_gen_', 'fmt': '| float', 'name': 'Total Generation', 'icon': 'mdi:solar-power', 'must_incr': True}}, # noqa: E501
@@ -295,6 +322,7 @@ class Infos:
Register.DATA_UP_INTERVAL: {'name': ['controller', 'Data_Up_Interval'], 'level': logging.DEBUG, 'unit': 's', 'ha': {'dev': 'controller', 'dev_cla': None, 'stat_cla': None, 'id': 'data_up_intval_', 'fmt': '| string + " s"', 'name': 'Data Up Interval', 'icon': 'mdi:update', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.HEARTBEAT_INTERVAL: {'name': ['controller', 'Heartbeat_Interval'], 'level': logging.DEBUG, 'unit': 's', 'ha': {'dev': 'controller', 'dev_cla': None, 'stat_cla': None, 'id': 'heartbeat_intval_', 'fmt': '| string + " s"', 'name': 'Heartbeat Interval', 'icon': 'mdi:update', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.IP_ADDRESS: {'name': ['controller', 'IP_Address'], 'level': logging.DEBUG, 'unit': '', 'ha': {'dev': 'controller', 'dev_cla': None, 'stat_cla': None, 'id': 'ip_address_', 'fmt': '| string', 'name': 'IP Address', 'icon': 'mdi:wifi', 'ent_cat': 'diagnostic'}}, # noqa: E501
Register.POLLING_INTERVAL: {'name': ['controller', 'Polling_Interval'], 'level': logging.DEBUG, 'unit': 's', 'ha': {'dev': 'controller', 'dev_cla': None, 'stat_cla': None, 'id': 'polling_intval_', 'fmt': '| string + " s"', 'name': 'Polling Interval', 'icon': 'mdi:update', 'ent_cat': 'diagnostic'}}, # noqa: E501
}
@property
@@ -305,7 +333,7 @@ class Infos:
def info_defs(self) -> dict:
return self.__info_defs
def dev_value(self, idx: str | int) -> str | int | float | None:
def dev_value(self, idx: str | int) -> str | int | float | dict | None:
'''returns the stored device value from our database
idx:int ==> lookup the value in the database and return it as str,
@@ -318,29 +346,29 @@ class Infos:
elif idx in self.info_defs:
row = self.info_defs[idx]
if 'singleton' in row and row['singleton']:
dict = self.stat
db_dict = self.stat
else:
dict = self.db
db_dict = self.db
keys = row['name']
for key in keys:
if key not in dict:
if key not in db_dict:
return None # value not found in the database
dict = dict[key]
return dict # value of the reqeusted entry
db_dict = db_dict[key]
return db_dict  # value of the requested entry
return None  # unknown idx, not in info_defs
def inc_counter(self, counter: str) -> None:
'''inc proxy statistic counter'''
dict = self.stat['proxy']
dict[counter] += 1
db_dict = self.stat['proxy']
db_dict[counter] += 1
def dec_counter(self, counter: str) -> None:
'''dec proxy statistic counter'''
dict = self.stat['proxy']
dict[counter] -= 1
db_dict = self.stat['proxy']
db_dict[counter] -= 1
def ha_proxy_confs(self, ha_prfx: str, node_id: str, snr: str) \
-> Generator[tuple[str, str, str, str], None, None]:
@@ -363,6 +391,20 @@ class Infos:
def ha_conf(self, key, ha_prfx, node_id, snr, singleton: bool,
sug_area: str = '') -> tuple[str, str, str, str] | None:
'''Method to build json register struct for home-assistant
auto configuration and the unique entity string, for all proxy
registers
arguments:
key ==> index of info_defs dict which references the topic
ha_prfx:str ==> MQTT prefix for the home assistant 'stat_t' string
node_id:str ==> node id of the inverter, used to build unique entity
snr:str ==> serial number of the inverter, used to build unique
entity strings
singleton ==> bool to allow/disallow proxy topics which are common
for all inverters
sug_area ==> area name for home assistant
'''
if key not in self.info_defs:
return None
row = self.info_defs[key]
@@ -466,7 +508,40 @@ class Infos:
return json.dumps(attr), component, node_id, attr['uniq_id']
return None
def _key_obj(self, id: Register) -> list:
def ha_remove(self, key, node_id, snr) -> tuple[str, str, str, str] | None:
'''Method to build json unregister struct for home-assistant
to remove topics per auto configuration. Only for inverter topics.
arguments:
key ==> index of info_defs dict which references the topic
node_id:str ==> node id of the inverter, used to build unique entity
snr:str ==> serial number of the inverter, used to build unique
entity strings
hint:
the returned tuple must have the same format as self.ha_conf()
'''
if key not in self.info_defs:
return None
row = self.info_defs[key]
if 'singleton' in row and row['singleton']:
return None
# check if we have details for home assistant
if 'ha' in row:
ha = row['ha']
if 'comp' in ha:
component = ha['comp']
else:
component = 'sensor'
attr = {}
uniq_id = ha['id']+snr
return json.dumps(attr), component, node_id, uniq_id
return None
def _key_obj(self, id: Register) -> tuple:
d = self.info_defs.get(id, {'name': None, 'level': logging.DEBUG,
'unit': ''})
if 'ha' in d and 'must_incr' in d['ha']:
@@ -478,21 +553,21 @@ class Infos:
def update_db(self, keys: list, must_incr: bool, result):
name = ''
dict = self.db
db_dict = self.db
for key in keys[:-1]:
if key not in dict:
dict[key] = {}
dict = dict[key]
if key not in db_dict:
db_dict[key] = {}
db_dict = db_dict[key]
name += key + '.'
if keys[-1] not in dict:
if keys[-1] not in db_dict:
update = (not must_incr or result > 0)
else:
if must_incr:
update = dict[keys[-1]] < result
update = db_dict[keys[-1]] < result
else:
update = dict[keys[-1]] != result
update = db_dict[keys[-1]] != result
if update:
dict[keys[-1]] = result
db_dict[keys[-1]] = result
name += keys[-1]
return name, update
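The `update_db` rename keeps the `must_incr` semantics intact: energy counters flagged `must_incr` (daily/total generation) only ever move forward, while plain values update on any change. The decision logic in isolation (a sketch under those assumptions, not the shipped code):

```python
def should_update(stored, result, must_incr: bool) -> bool:
    """Decide whether a parsed value replaces the stored one."""
    if stored is None:                    # first value for this key
        return not must_incr or result > 0
    if must_incr:
        return stored < result            # monotonic counters
    return stored != result               # plain values

assert should_update(None, 0, True) is False   # don't store an initial 0
assert should_update(5, 4, True) is False      # counter may not decrease
assert should_update(5, 6, True) is True
assert should_update(5, 5, False) is False
```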
@@ -546,13 +621,13 @@ class Infos:
return True
if 'gte' in dep:
return not value >= dep['gte']
return value < dep['gte']
elif 'less_eq' in dep:
return not value <= dep['less_eq']
return value > dep['less_eq']
return True
def set_pv_module_details(self, inv: dict) -> None:
map = {'pv1': {'manufacturer': Register.PV1_MANUFACTURER, 'model': Register.PV1_MODEL}, # noqa: E501
pvs = {'pv1': {'manufacturer': Register.PV1_MANUFACTURER, 'model': Register.PV1_MODEL}, # noqa: E501
'pv2': {'manufacturer': Register.PV2_MANUFACTURER, 'model': Register.PV2_MODEL}, # noqa: E501
'pv3': {'manufacturer': Register.PV3_MANUFACTURER, 'model': Register.PV3_MODEL}, # noqa: E501
'pv4': {'manufacturer': Register.PV4_MANUFACTURER, 'model': Register.PV4_MODEL}, # noqa: E501
@@ -560,7 +635,7 @@ class Infos:
'pv6': {'manufacturer': Register.PV6_MANUFACTURER, 'model': Register.PV6_MODEL} # noqa: E501
}
for key, reg in map.items():
for key, reg in pvs.items():
if key in inv:
if 'manufacturer' in inv[key]:
self.set_db_def_value(reg['manufacturer'],


@@ -5,16 +5,16 @@ from enum import Enum
if __name__ == "app.src.messages":
from app.src.infos import Infos
from app.src.infos import Infos, Register
from app.src.modbus import Modbus
else: # pragma: no cover
from infos import Infos
from infos import Infos, Register
from modbus import Modbus
logger = logging.getLogger('msg')
def hex_dump_memory(level, info, data, num):
def hex_dump_memory(level, info, data, data_len):
n = 0
lines = []
lines.append(info)
@@ -22,20 +22,20 @@ def hex_dump_memory(level, info, data, num):
if not tracer.isEnabledFor(level):
return
for i in range(0, num, 16):
for i in range(0, data_len, 16):
line = ' '
line += '%04x | ' % (i)
n += 16
for j in range(n-16, n):
if j >= len(data):
if j >= data_len:
break
line += '%02x ' % abs(data[j])
line += ' ' * (3 * 16 + 9 - len(line)) + ' | '
for j in range(n-16, n):
if j >= len(data):
if j >= data_len:
break
c = data[j] if not (data[j] < 0x20 or data[j] > 0x7e) else '.'
line += '%c' % c
@@ -91,6 +91,7 @@ class Message(metaclass=IterRegistry):
self._forward_buffer = bytearray(0)
self.new_data = {}
self.state = State.init
self.shutdown_started = False
'''
Empty methods, that have to be implemented in any child class which
@@ -102,7 +103,22 @@ class Message(metaclass=IterRegistry):
def _update_header(self, _forward_buffer):
'''callback for updating the header of the forward buffer'''
return # pragma: no cover
pass # pragma: no cover
def _set_mqtt_timestamp(self, key, ts: float | None):
if key not in self.new_data or \
not self.new_data[key]:
if key == 'grid':
info_id = Register.TS_GRID
elif key == 'input':
info_id = Register.TS_INPUT
elif key == 'total':
info_id = Register.TS_TOTAL
else:
return
# tstr = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(ts))
# logger.info(f'update: key: {key} ts:{tstr}')
self.db.set_db_def_value(info_id, round(ts))
'''
Our public methods
@@ -111,7 +127,7 @@ class Message(metaclass=IterRegistry):
if self.mb:
self.mb.close()
self.mb = None
pass # pragma: no cover
# pragma: no cover
def inc_counter(self, counter: str) -> None:
self.db.inc_counter(counter)


@@ -41,8 +41,10 @@ class Modbus():
__crc_tab = []
map = {
0x2007: {'reg': Register.MAX_DESIGNED_POWER, 'fmt': '!H', 'ratio': 1}, # noqa: E501
# 0x????: {'reg': Register.INVERTER_STATUS, 'fmt': '!H'}, # noqa: E501
0x3008: {'reg': Register.VERSION, 'fmt': '!H', 'eval': "f'V{(result>>12)}.{(result>>8)&0xf}.{(result>>4)&0xf}{result&0xf}'"}, # noqa: E501
0x202c: {'reg': Register.OUTPUT_COEFFICIENT, 'fmt': '!H', 'ratio': 100/1024}, # noqa: E501
0x3000: {'reg': Register.INVERTER_STATUS, 'fmt': '!H'}, # noqa: E501
0x3008: {'reg': Register.VERSION, 'fmt': '!H', 'eval': "f'V{(result>>12)}.{(result>>8)&0xf}.{(result>>4)&0xf}{result&0xf:1X}'"}, # noqa: E501
0x3009: {'reg': Register.GRID_VOLTAGE, 'fmt': '!H', 'ratio': 0.1}, # noqa: E501
0x300a: {'reg': Register.GRID_CURRENT, 'fmt': '!H', 'ratio': 0.01}, # noqa: E501
0x300b: {'reg': Register.GRID_FREQUENCY, 'fmt': '!H', 'ratio': 0.01}, # noqa: E501
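Two register decodings change in this map: 0x202c scales the raw output coefficient by 100/1024 into percent, and the 0x3008 `eval` now renders the last version nibble as hexadecimal (the #121 fix). Both can be sketched as standalone helpers (hypothetical names, same formulas as the map entries):

```python
def decode_version(result: int) -> str:
    """Reg 0x3008: three decimal nibbles plus a final hex digit (#121)."""
    return (f'V{result >> 12}.{(result >> 8) & 0xf}.'
            f'{(result >> 4) & 0xf}{result & 0xf:1X}')

def decode_output_coefficient(result: int) -> float:
    """Reg 0x202c: raw 0..1024 value scaled to percent."""
    return result * 100 / 1024

assert decode_version(0x142A) == 'V1.4.2A'
assert decode_output_coefficient(1024) == 100.0
assert decode_output_coefficient(512) == 50.0
```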
@@ -104,6 +106,7 @@ class Modbus():
self.loop = asyncio.get_event_loop()
self.req_pend = False
self.tim = None
self.node_id = ''
def close(self):
"""free the queue and erase the callback handlers"""
@@ -111,7 +114,7 @@ class Modbus():
self.__stop_timer()
self.rsp_handler = None
self.snd_handler = None
while not self.que.empty:
while not self.que.empty():
self.que.get_nowait()
def __del__(self):
@@ -178,6 +181,8 @@ class Modbus():
5: No MODBUS request pending
"""
# logging.info(f'recv_resp: first byte modbus:{buf[0]} len:{len(buf)}')
self.node_id = node_id
if not self.req_pend:
self.err = 5
return
@@ -265,7 +270,10 @@ class Modbus():
self.__start_timer()
self.snd_handler(self.last_req, self.last_log_lvl, state='Retrans')
else:
logger.info(f'Modbus timeout {self}')
logger.info(f'[{self.node_id}] Modbus timeout '
f'(FCode: {self.last_fcode} '
f'Reg: 0x{self.last_reg:04x}, '
f'{self.last_len})')
self.counter['timeouts'] += 1
self.__send_next_from_que()

app/src/modbus_tcp.py Normal file

@@ -0,0 +1,76 @@
import logging
import traceback
import asyncio
from config import Config
# import gc
from gen3plus.inverter_g3p import InverterG3P
logger = logging.getLogger('conn')
class ModbusConn():
def __init__(self, host, port):
self.host = host
self.port = port
self.addr = (host, port)
self.stream = None
async def __aenter__(self) -> 'InverterG3P':
'''Establish a client connection to the TSUN cloud'''
connection = asyncio.open_connection(self.host, self.port)
reader, writer = await connection
self.stream = InverterG3P(reader, writer, self.addr,
client_mode=True)
logging.info(f'[{self.stream.node_id}:{self.stream.conn_no}] '
f'Connected to {self.addr}')
self.stream.inc_counter('Inverter_Cnt')
await self.stream.publish_outstanding_mqtt()
return self.stream
async def __aexit__(self, exc_type, exc, tb):
self.stream.dec_counter('Inverter_Cnt')
await self.stream.publish_outstanding_mqtt()
class ModbusTcp():
def __init__(self, loop) -> None:
inverters = Config.get('inverters')
# logging.info(f'Inverters: {inverters}')
for inv in inverters.values():
if (type(inv) is dict
and 'monitor_sn' in inv
and 'client_mode' in inv):
client = inv['client_mode']
# logging.info(f"SerialNo:{inv['monitor_sn']} host:{client['host']} port:{client['port']}") # noqa: E501
loop.create_task(self.modbus_loop(client['host'],
client['port'],
inv['monitor_sn']))
async def modbus_loop(self, host, port, snr: int) -> None:
'''Loop for receiving messages from the TSUN cloud (client-side)'''
while True:
try:
async with ModbusConn(host, port) as stream:
await stream.send_start_cmd(snr, host)
await stream.loop()
logger.info(f'[{stream.node_id}:{stream.conn_no}] '
f'Connection closed - Shutdown: '
f'{stream.shutdown_started}')
if stream.shutdown_started:
return
except (ConnectionRefusedError, TimeoutError) as error:
logging.debug(f'Inv-conn:{error}')
except OSError as error:
logging.info(f'os-error: {error}')
except Exception:
logging.error(
f"ModbusTcpCreate: Exception for {(host,port)}:\n"
f"{traceback.format_exc()}")
await asyncio.sleep(10)
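`modbus_loop` above pairs the `ModbusConn` async context manager with an endless reconnect loop: open a client connection, serve it until it closes, swallow transient network errors, and retry after a pause. The pattern can be sketched generically as follows (all names here — `reconnect_loop`, `connect`, `handle` — are illustrative, not part of the project):

```python
import asyncio


async def reconnect_loop(connect, handle, retry_delay: float = 10.0):
    """Sketch of the modbus_loop pattern: connect, serve, retry on error.

    `connect` returns an async context manager yielding a stream;
    `handle` serves the stream and returns True once shutdown started.
    """
    while True:
        try:
            async with connect() as stream:
                if await handle(stream):
                    return  # shutdown requested: leave the loop
        except (ConnectionRefusedError, TimeoutError, OSError):
            pass  # transient network error: retry after a pause
        await asyncio.sleep(retry_delay)
```

Putting the connection in an `async with` block guarantees the `__aexit__` bookkeeping (counter decrement, MQTT flush) runs on every exit path, including exceptions.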


@@ -38,7 +38,8 @@ class Mqtt(metaclass=Singleton):
self.task.cancel()
try:
await self.task
-except Exception as e:
+except (asyncio.CancelledError, Exception) as e:
logging.debug(f"Mqtt.close: exception: {e} ...")
async def publish(self, topic: str, payload: str | bytes | bytearray
@@ -60,6 +61,7 @@ class Mqtt(metaclass=Singleton):
interval = 5 # Seconds
ha_status_topic = f"{ha['auto_conf_prefix']}/status"
mb_rated_topic = "tsun/+/rated_load" # fixme
+mb_out_coeff_topic = "tsun/+/out_coeff" # fixme
mb_reads_topic = "tsun/+/modbus_read_regs" # fixme
mb_inputs_topic = "tsun/+/modbus_read_inputs" # fixme
mb_at_cmd_topic = "tsun/+/at_cmd" # fixme
@@ -75,6 +77,7 @@ class Mqtt(metaclass=Singleton):
# async with self.__client.messages() as messages:
await self.__client.subscribe(ha_status_topic)
await self.__client.subscribe(mb_rated_topic)
+await self.__client.subscribe(mb_out_coeff_topic)
await self.__client.subscribe(mb_reads_topic)
await self.__client.subscribe(mb_inputs_topic)
await self.__client.subscribe(mb_at_cmd_topic)
@@ -93,6 +96,19 @@ class Mqtt(metaclass=Singleton):
Modbus.WRITE_SINGLE_REG,
1, 0x2008)
+if message.topic.matches(mb_out_coeff_topic):
+payload = message.payload.decode("UTF-8")
+val = round(float(payload) * 1024/100)
+if val < 0 or val > 1024:
+logger_mqtt.error('out_coeff: value must be in '
+'the range 0..100,'
+f' got: {payload}')
+else:
+await self.modbus_cmd(message,
+Modbus.WRITE_SINGLE_REG,
+0, 0x202c, val)
if message.topic.matches(mb_reads_topic):
await self.modbus_cmd(message,
Modbus.READ_REGS, 2)
@@ -154,7 +170,7 @@ class Mqtt(metaclass=Singleton):
logger_mqtt.debug(f'Found: {node_id}')
fnc = getattr(m, "send_modbus_cmd", None)
res = payload.split(',')
-if params != len(res):
+if params > 0 and params != len(res):
logger_mqtt.error(f'Parameter expected: {params}, '
f'got: {len(res)}')
return


@@ -11,6 +11,7 @@ from gen3.inverter_g3 import InverterG3
from gen3plus.inverter_g3p import InverterG3P
from scheduler import Schedule
from config import Config
+from modbus_tcp import ModbusTcp
routes = web.RouteTableDef()
proxy_is_up = False
@@ -94,6 +95,7 @@ async def handle_shutdown(web_task):
# first, disc all open TCP connections gracefully
#
for stream in Message:
+stream.shutdown_started = True
try:
await asyncio.wait_for(stream.disc(), 2)
except Exception:
@@ -115,6 +117,13 @@ async def handle_shutdown(web_task):
web_task.cancel()
await web_task
#
+# now cancel all remaining (pending) tasks
+#
+pending = asyncio.all_tasks()
+for task in pending:
+task.cancel()
#
# at last, start a coro for stopping the loop
#
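The shutdown hunk above cancels every still-pending task before stopping the loop. A common variant of this pattern (a sketch, not the project's code) excludes the current task from the cancellation and then awaits the cancelled tasks, so they have finished unwinding before the loop stops:

```python
import asyncio


async def shutdown_remaining_tasks():
    # Cancel every task still pending except ourselves,
    # then wait until all cancellations have completed
    pending = [t for t in asyncio.all_tasks()
               if t is not asyncio.current_task()]
    for task in pending:
        task.cancel()
    await asyncio.gather(*pending, return_exceptions=True)
```

`return_exceptions=True` keeps `gather` from re-raising the `CancelledError` of each child, so the caller sees a clean return.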
@@ -164,6 +173,7 @@ if __name__ == "__main__":
logging.info(f'ConfigErr: {ConfigErr}')
Inverter.class_init()
Schedule.start()
+mb_tcp = ModbusTcp(loop)
#
# Create tasks for our listening servers. These must be tasks! If we call