Depending on the channel configuration, this would send three LinkADRReq
mac-commands, while the same configuration could be sent to the device in
two LinkADRReq mac-commands.
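To illustrate why the count depends on the channel configuration: each
LinkADRReq carries a 16-bit ChMask plus a ChMaskCntl value that (in regions
with more than 16 channels) selects which block of 16 channels the mask
applies to, so one mac-command is needed per block that must be modified.
Below is a minimal Rust sketch of this grouping (illustrative only, not
ChirpStack's actual mac-command code):

    use std::collections::BTreeMap;

    // Group enabled channel indices into (ch_mask_cntl, ch_mask) pairs;
    // each pair would map to one LinkADRReq mac-command.
    fn ch_masks(enabled_channels: &[usize]) -> Vec<(u8, u16)> {
        let mut blocks: BTreeMap<u8, u16> = BTreeMap::new();
        for &ch in enabled_channels {
            let block = (ch / 16) as u8; // ChMaskCntl: which block of 16 channels
            let bit = (ch % 16) as u16;  // bit within the ChMask
            *blocks.entry(block).or_insert(0) |= 1 << bit;
        }
        blocks.into_iter().collect()
    }

For example, enabling channels 8..15 plus channel 65 touches blocks 0 and 4,
and thus results in two LinkADRReq mac-commands.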
The environment variables are used such that the same configuration can be
used for the Docker images and the .deb and .rpm packages. However, when
installing the .deb or .rpm package this can be confusing, as executing
'chirpstack -c /etc/chirpstack' directly will fail: these environment
variables are only set when the ChirpStack process is started through
systemctl (they are configured in the .service unit file).
In case of multi-server deployments, this can be confusing as each VM
generates different certificate files by default, while all instances must
share the same certificate (or at least the same CA certificate + key).
The other issue is that the MQTT broker certificate must contain the
correct hostname, which in most cases cannot be retrieved automatically.
Documentation on generating these certificates can be found here:
https://www.chirpstack.io/docs/guides/mosquitto-tls-configuration.html
In some scenarios, this check returned true while the AVX2 extension was
not available (e.g. on a QEMU emulated CPU). See
https://github.com/RustCrypto/hashes/pull/386 for more details.
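For reference, runtime feature detection using the Rust standard library
looks like the sketch below; this only illustrates the kind of check
involved, it is not the cpufeatures-based code that the linked pull request
changes:

    // Pick a hash backend based on a runtime CPU feature check; the check
    // returns false when the CPU (or the emulator) does not actually expose
    // the AVX2 extension.
    #[cfg(target_arch = "x86_64")]
    fn hash_backend() -> &'static str {
        if std::is_x86_feature_detected!("avx2") {
            "avx2"
        } else {
            "portable"
        }
    }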
The first returns the Protobuf integer, the second the generated enum
type. The big difference is that calling .to_string() on the first converts
the Protobuf integer to the same value, but as a String, whereas the latter
returns something like '1.0.3', which is provided by the fmt::Display trait
implementation in ChirpStack.
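As an illustration (the enum name, variant and discriminant below only mimic
the pattern of the generated Protobuf code; they are not copied from the
ChirpStack API):

    use std::fmt;

    // Prost-style generated enum: the Protobuf message field itself is
    // stored as an i32, and an accessor method returns this enum type.
    #[derive(Clone, Copy, Debug, PartialEq)]
    enum MacVersion {
        Lorawan103 = 3, // discriminant chosen for illustration
    }

    // Display implementation, as ChirpStack provides for the enum type.
    impl fmt::Display for MacVersion {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                MacVersion::Lorawan103 => write!(f, "1.0.3"),
            }
        }
    }

    fn main() {
        let raw: i32 = MacVersion::Lorawan103 as i32;
        assert_eq!(raw.to_string(), "3");                        // Protobuf integer as String
        assert_eq!(MacVersion::Lorawan103.to_string(), "1.0.3"); // via fmt::Display
    }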
Please note that for LoRaWAN 1.1.x, mac-commands in the f_opts field are
encrypted. Within the context of the device we can decrypt these, but
within the context of a gateway we can only show these as raw bytes.
This changes the clean_session default to false, as QoS > 0 is only
effective in case of a persistent session. If the client_id is not set,
ChirpStack will generate a random client_id, which stays the same during
the lifetime of the chirpstack process.
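A minimal sketch of this behaviour, assuming the paho-mqtt Rust client (the
crate choice, the client_id prefix and the helper function below are
assumptions, not the ChirpStack implementation):

    use paho_mqtt as mqtt;
    use rand::Rng;

    fn build_client(
        client_id: Option<String>,
    ) -> Result<(mqtt::AsyncClient, mqtt::ConnectOptions), mqtt::Error> {
        // If no client_id is configured, generate a random one once; it then
        // stays the same for the lifetime of the process.
        let client_id = client_id
            .unwrap_or_else(|| format!("chirpstack-{}", rand::thread_rng().gen::<u32>()));

        let create_opts = mqtt::CreateOptionsBuilder::new()
            .client_id(client_id)
            .finalize();
        let client = mqtt::AsyncClient::new(create_opts)?;

        // clean_session = false requests a persistent session, which is
        // required for the broker to queue QoS > 0 messages while the client
        // is disconnected.
        let conn_opts = mqtt::ConnectOptionsBuilder::new()
            .clean_session(false)
            .finalize();

        Ok((client, conn_opts))
    }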
This also implements a subscribe loop, as the client re-connect feature
does not re-subscribe. Even in case of a persistent session there is no
guarantee that the subscription is recovered, as the MQTT broker might have
been restarted: if the broker stores sessions in memory, the client would
re-connect, but without its subscriptions.
The (re)subscribe logic is placed outside the on-connected callback, as
the callback function must not block and thus can not wait for the
subscribe result. Now the (re)subscribe happens asynchronously from the
on-connected callback.
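A minimal sketch of this construction, again assuming the paho-mqtt client
(the topic filter and QoS below are placeholders): the on-connected
callback only signals a channel, and a separate task performs the actual
(re)subscribe and awaits the result.

    use paho_mqtt as mqtt;
    use tokio::sync::mpsc;

    fn start_subscribe_loop(client: mqtt::AsyncClient) {
        let (tx, mut rx) = mpsc::channel::<()>(10);

        // The on-connected callback must not block: it only signals that a
        // (re)connect has happened.
        client.set_connected_callback(move |_client| {
            let _ = tx.try_send(());
        });

        // The (re)subscribe loop runs outside the callback, so it can await
        // the subscribe result. It subscribes on every (re)connect, as
        // neither the re-connect feature nor a persistent session guarantees
        // that the subscription still exists (e.g. after a broker restart).
        tokio::spawn(async move {
            while rx.recv().await.is_some() {
                if let Err(e) = client.subscribe("gateway/+/event/+", 1).await {
                    eprintln!("MQTT subscribe error: {}", e);
                }
            }
        });
    }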