Before this change, the cleanup code could remove a volume that should
have been kept, if the container referencing that volume was deleted
before the cleanup call ran.
To avoid this, we additionally filter the list of volumes to clean up,
excluding any that are referenced in the target state. This means that
cleanup will never remove a volume that is still supposed to be present,
regardless of whether a container currently references it.
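A minimal sketch of the filtering, with simplified, hypothetical shapes
(the real supervisor types and names differ):

```ts
interface Volume {
  appId: number;
  name: string;
}

// Hypothetical shapes; the real supervisor types differ.
function volumesToRemove(
  present: Volume[],
  target: { volumes: Volume[] },
): Volume[] {
  const isInTarget = (v: Volume) =>
    target.volumes.some((t) => t.appId === v.appId && t.name === v.name);
  // Never remove a volume that is still part of the target state,
  // even if no container currently references it.
  return present.filter((v) => !isInTarget(v));
}
```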
Change-type: patch
Signed-off-by: Cameron Diver <cameron@balena.io>
This change also makes sure that in the application-manager workflow we
pass around instances of the Volume class, rather than just the config.
Change-type: patch
Signed-off-by: Cameron Diver <cameron@balena.io>
Since we were comparing the VPN's value before adding the explicit "true", there were cases
where the VPN was off, and therefore "value" didn't match the default, so the supervisor would
create a device-specific SUPERVISOR_VPN_CONTROL = true. This is unnecessary and causes issues if
users don't expect it and move the device to an app that has the VPN disabled. The correct behavior
is to compare "varValue" and only create a device config var if that value differs from the default.
(This was the behavior before the TS conversion in 01ed7bb103b4df8fb0679cf858220db42d4a0b92.)
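Roughly, the intended check looks like this (names are illustrative,
not the exact supervisor code):

```ts
const DEFAULT_VPN_CONTROL = 'true';

// Only create a device-specific SUPERVISOR_VPN_CONTROL config var when the
// raw target value ("varValue") actually differs from the default, rather
// than comparing the value after the explicit "true" has been added.
function shouldCreateDeviceConfigVar(varValue: string | undefined): boolean {
  return varValue != null && varValue !== DEFAULT_VPN_CONTROL;
}
```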
Change-type: patch
Signed-off-by: Pablo Carranza Velez <pablo@balena.io>
This change makes DeviceState wait until the local mode switch has
definitely completed before applying the state, which avoids races in
state cleanup.
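As a simplified sketch of the ordering (hypothetical method names, not
the real DeviceState API):

```ts
// Hypothetical interface; the real DeviceState/local mode APIs differ.
interface LocalModeManager {
  // Resolves once any in-progress local mode switch (including its
  // state cleanup) has finished.
  switchCompletion(): Promise<void>;
}

async function applyTargetState(
  localMode: LocalModeManager,
  apply: () => Promise<void>,
): Promise<void> {
  // Wait for the switch to complete before applying the state, so the
  // apply never races with the local mode cleanup.
  await localMode.switchCompletion();
  await apply();
}
```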
Change-type: patch
Signed-off-by: Roman Mazur <roman@balena.io>
In local mode, we now update the device status on the backend, but
omit applications info from our updates.
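For illustration only (the 'apps' key and report shape are assumptions,
not the supervisor's actual report format):

```ts
import * as _ from 'lodash';

// Sketch: strip the applications section from the current state report
// while in local mode, but keep sending the rest of the device status.
function stateReportFor(
  state: { [key: string]: unknown },
  localMode: boolean,
) {
  return localMode ? _.omit(state, ['apps']) : state;
}
```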
Closes: #959
Change-type: minor
Signed-off-by: Roman Mazur <roman@balena.io>
Also use the supervisor's own container log monitoring code when
running livepush on the supervisor container.
Change-type: minor
Signed-off-by: Cameron Diver <cameron@balena.io>
Changes are collected and kept in memory, where they can be queried and
saved. Once every 10 minutes, every changed timestamp is flushed to the
database.
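A minimal sketch of the flush loop, assuming a hypothetical persistence
helper (the real supervisor writes via its database layer):

```ts
// In-memory cache of changed timestamps, keyed by container ID.
const changedTimestamps = new Map<string, number>();

function recordTimestamp(containerId: string, timestamp: number): void {
  changedTimestamps.set(containerId, timestamp);
}

// Hypothetical persistence helper.
async function persistTimestamp(
  containerId: string,
  timestamp: number,
): Promise<void> {
  // e.g. insert-or-update a row in a container logs table
}

// Every 10 minutes, write out whatever changed and clear the cache.
setInterval(async () => {
  for (const [containerId, timestamp] of changedTimestamps) {
    await persistTimestamp(containerId, timestamp);
  }
  changedTimestamps.clear();
}, 10 * 60 * 1000);
```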
Change-type: patch
Closes: #987
Signed-off-by: Cameron Diver <cameron@balena.io>
This is a massive commit, but nothing related to runtime has actually
changed; only the lint errors have been fixed.
Change-type: patch
Signed-off-by: Cameron Diver <cameron@balena.io>
Changes in the Node engine related to streams would cause the gzip
stream's flush function to be called at the wrong times. The sinon fake
timers were also interacting with this.
We now use setImmediate to call the flush function, and remove the
sinon fake timers from the logging tests.
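For illustration, the flush now gets deferred roughly like this
(simplified):

```ts
import * as zlib from 'zlib';

const gzip = zlib.createGzip();

// Defer the flush to the next event loop turn instead of calling it inline,
// so it no longer fires at the wrong point in the stream's lifecycle and no
// longer depends on (fake) timers in the tests.
function scheduleFlush(): void {
  setImmediate(() => {
    gzip.flush();
  });
}
```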
Change-type: patch
Signed-off-by: Cameron Diver <cameron@balena.io>
Before this change, the first run of the cleanup code would happen
before the migrations had a chance to execute. This change makes sure
that the cleanup code always runs after the migrations have finished.
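In outline (hypothetical function names standing in for the real
migration and cleanup steps):

```ts
// Stand-ins for the supervisor's real migration and cleanup steps.
async function runMigrations(): Promise<void> {
  // e.g. knex.migrate.latest()
}

async function cleanup(): Promise<void> {
  // prune rows that are no longer needed
}

async function init(): Promise<void> {
  // The cleanup touches tables created by the migrations, so it must
  // always wait for the migrations to finish before its first run.
  await runMigrations();
  await cleanup();
}
```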
Change-type: patch
Signed-off-by: Cameron Diver <cameron@balena.io>
Before this change, when multiple host ports were assigned to a single
container port, the supervisor would incorrectly take only the first
host port into consideration. This change makes it so that every host
port assigned to a container port is considered.
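A small sketch using Docker-style port binding structures (shapes are
illustrative):

```ts
// Docker-style bindings: container port -> list of host port bindings.
type PortBindings = {
  [containerPort: string]: Array<{ HostIp?: string; HostPort: string }>;
};

function hostPortsFor(bindings: PortBindings, containerPort: string): string[] {
  // Take every host port bound to this container port into account,
  // not just the first entry in the list.
  return (bindings[containerPort] ?? []).map((b) => b.HostPort);
}
```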
Closes: #986
Change-type: patch
Signed-off-by: Cameron Diver <cameron@balena.io>
Prior to this change, we would `_.uniq` the expose values before adding
the values from the port mappings. This could cause ports to be added
twice, which would make the supervisor think that there was a
configuration mismatch.
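Roughly, the order of operations changes from dedupe-then-merge to
merge-then-dedupe (a sketch, not the exact supervisor code):

```ts
import * as _ from 'lodash';

function buildExposedPorts(
  expose: string[],
  portMappingPorts: string[],
): string[] {
  // Deduplicate *after* merging in the ports from the port mappings;
  // deduplicating first can leave the same port in the list twice and make
  // the comparison against the running container report a false mismatch.
  return _.uniq([...expose, ...portMappingPorts]);
}
```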
Change-type: patch
Signed-off-by: Cameron Diver <cameron@balena.io>
Even though this would never have attempted to report the state to the
API during local mode, it left behind artifacts which could cause the
state to sometimes be reported when exiting local mode. This would
cause the API to reject the update unnecessarily.
Change-type: patch
Signed-off-by: Cameron Diver <cameron@balena.io>
This had a bug where it was using the `in` operator on a list. Since
`in` checks for a key (an array index) rather than a value, it may have
worked in some cases but would have failed in others.
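For example:

```ts
const services = ['main', 'sidecar'];

// Buggy: `in` checks for an array *index* (or property name), not a value.
const wrong = 'main' in services; // false

// Correct: check for the value itself.
const right = services.includes('main'); // true
```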
Change-type: patch
Signed-off-by: Cameron Diver <cameron@balena.io>
We add a database table which holds the timestamp of the last log
successfully reported to a backend (local or remote). We then use this
value to work out the point in time from which to start reporting the
container's logs. If this is the first time we've seen a container, we
get all logs, and for every log reported we save its timestamp. If it
is not the first time we've seen the container, we request all logs
since the last reported time, ensuring no interruption of service.
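A condensed sketch of the logic, with hypothetical database helpers
(the real implementation lives in the supervisor's database layer):

```ts
interface LogLine {
  timestamp: number;
  message: string;
}

// Hypothetical helpers standing in for the new database table.
async function lastSentTimestamp(
  containerId: string,
): Promise<number | undefined> {
  return undefined; // look up the row for this container, if any
}
async function saveSentTimestamp(
  containerId: string,
  ts: number,
): Promise<void> {}

async function reportContainerLogs(
  containerId: string,
  logsSince: (since: number) => AsyncIterable<LogLine>,
  report: (line: LogLine) => Promise<void>,
): Promise<void> {
  // First time we see the container: start from 0 (all logs).
  // Otherwise: resume from the last successfully reported timestamp.
  const since = (await lastSentTimestamp(containerId)) ?? 0;
  for await (const line of logsSince(since)) {
    await report(line);
    await saveSentTimestamp(containerId, line.timestamp);
  }
}
```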
Change-type: minor
Closes: #937
Signed-off-by: Cameron Diver <cameron@balena.io>
Container logging is now handled by a class which attaches to the
container and emits its log output. We add these classes to the
logging-backends/ directory, and rename it to logging/.
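Illustrative shape only (the real class wraps the Docker log stream;
names here are assumptions):

```ts
import { EventEmitter } from 'events';

// Sketch of the attach-and-emit shape.
class ContainerLogMonitor extends EventEmitter {
  constructor(private readonly containerId: string) {
    super();
  }

  public attach(stream: NodeJS.ReadableStream): void {
    // Re-emit each chunk of container output as a 'log' event for the
    // logging backends to consume.
    stream.on('data', (chunk: Buffer) => {
      this.emit('log', {
        containerId: this.containerId,
        message: chunk.toString(),
      });
    });
  }
}
```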
Change-type: minor
Signed-off-by: Cameron Diver <cameron@balena.io>