This label can be used by user services to indicate that a reboot is
required after a service is installed in order to fully apply an update.
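A minimal sketch of how such a label could be consumed, assuming a
hypothetical label name and breadcrumb path (neither is confirmed by this
changelog):
```typescript
import { promises as fs } from 'fs';

// Assumed identifiers, for illustration only.
const REQUIRES_REBOOT_LABEL = 'io.balena.update.requires-reboot';
const REBOOT_BREADCRUMB = '/tmp/balena/reboot-required';

// After a service is installed, leave a breadcrumb if it carries the label,
// so the state engine knows to reboot once the update has been applied.
async function markRebootIfRequired(
	labels: Record<string, string>,
): Promise<void> {
	if (labels[REQUIRES_REBOOT_LABEL] === '1') {
		await fs.writeFile(REBOOT_BREADCRUMB, '');
	}
}
```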
Change-type: minor
This was handled in device-config before, but we'll also need to set the
reboot breadcrumb from the application-manager when we introduce
`requires-reboot` as a label.
Change-type: patch
Move the device-config module into the device-state folder and export only
those functions that are needed elsewhere in the codebase.
This moves us closer to making the device-state module the only way to
modify application and configuration state.
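As a sketch, the narrowed export surface could look like this (file layout
and function names are assumptions):
```typescript
// device-state/index.ts: re-export only what the rest of the codebase needs;
// everything else in device-config stays internal to device-state.
export { getBootConfig, setBootConfig } from './device-config';
```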
Change-type: patch
This fixes a regression where dependencies would only be started in order,
and a dependent service would be started if its dependency had been started
at some point in the past, regardless of whether it was still running. This
makes the behavior more consistent with docker compose, where the
[dependency needs to be running or healthy](https://github.com/docker/compose/blob/69a83d1303/pkg/compose/convergence.go#L441)
for the dependent service to be started.
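A sketch of the corrected start condition, using hypothetical service-state
types rather than the Supervisor's actual ones:
```typescript
// Hypothetical service state, for illustration only.
interface ServiceState {
	status: 'Running' | 'Exited' | 'Installing';
	health?: 'starting' | 'healthy' | 'unhealthy';
}

// A dependent service may start only while every dependency is currently
// running (and healthy, if it defines a healthcheck), not merely started
// at some point in the past.
function canStartDependent(dependencies: ServiceState[]): boolean {
	return dependencies.every(
		(dep) =>
			dep.status === 'Running' &&
			(dep.health === undefined || dep.health === 'healthy'),
	);
}
```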
Change-type: patch
This config backend uses ConfigJsonConfigBackend to update the `os.power`
and `os.fan` subfields under the `os` key, in order to set power and fan
configs. The expected format for `os.power` and `os.fan` settings is:
```
{
  "os": {
    "power": {
      "mode": "string"
    },
    "fan": {
      "profile": "string"
    }
  }
}
```
There may be other keys in `os` which are not managed by the Supervisor,
so the PowerFanConfig backend doesn't read or write them. Extra keys in
`os.power` and `os.fan` are ignored when getting boot config and removed
when setting boot config.
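A sketch of these get/set rules; the types and function names are
assumptions, not the backend's actual API:
```typescript
interface OsConfig {
	power?: { mode?: string };
	fan?: { profile?: string };
	// Other keys under "os" may exist but are not managed by the Supervisor.
	[key: string]: unknown;
}

interface PowerFanConfig {
	power?: { mode: string };
	fan?: { profile: string };
}

// Read only the managed subfields; extra keys in os.power and os.fan, as
// well as unmanaged siblings under "os", are ignored.
function getManaged(os: OsConfig): PowerFanConfig {
	const managed: PowerFanConfig = {};
	if (typeof os.power?.mode === 'string') {
		managed.power = { mode: os.power.mode };
	}
	if (typeof os.fan?.profile === 'string') {
		managed.fan = { profile: os.fan.profile };
	}
	return managed;
}

// Replace the managed subfields wholesale (dropping any extra keys inside
// them) while leaving unmanaged sibling keys under "os" untouched.
function setManaged(os: OsConfig, target: PowerFanConfig): OsConfig {
	const next = { ...os };
	delete next.power;
	delete next.fan;
	return { ...next, ...target };
}
```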
After this backend writes to config.json, the host services `os-power-mode`
and `os-fan-profile` pick up the changes, on reboot in the former's case
and at runtime in the latter's. The changes are applied by the host
services, which the Supervisor does not manage aside from streaming
their service logs to the dashboard.
Change-type: minor
Signed-off-by: Christina Ying Wang <christina@balena.io>
Also deprecate the path-getting method and remove the OS version check.
The OS version itself is not used in ConfigJsonConfigBackend, so the check
appears to exist only to confirm that config.json is present during class
init, as OS version is a field that is always present in a valid
config.json.
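If the goal is only to confirm config.json exists, a direct check says so
explicitly; a sketch, with the path as an assumption:
```typescript
import { access } from 'fs/promises';

// Throws if config.json is missing, which is all the old OS version check
// actually verified.
async function assertConfigJsonExists(
	path = '/mnt/boot/config.json',
): Promise<void> {
	await access(path);
}
```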
Signed-off-by: Christina Ying Wang <christina@balena.io>
This will catch any container or host logs emitted between Supervisor runs.
If `FinishedAt` is invalid (0), the last sent timestamp is already set (i.e.
this isn't the first time `logMonitor.start()` has been called), or
the Supervisor container metadata couldn't be acquired, use the
Supervisor process uptime as the default. This has the downside of
missing any logs generated during Supervisor downtime, but at least
means the log streamer can proceed without error.
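A sketch of this fallback logic (names and shapes are illustrative):
```typescript
// Prefer the previous container's FinishedAt so logs emitted between
// Supervisor runs are caught; otherwise fall back to process start time.
function initialLastSentTimestamp(
	finishedAt: number, // epoch ms; 0 when invalid or metadata was unavailable
	lastSent?: number, // already set if logMonitor.start() has run before
): number {
	if (lastSent !== undefined) {
		return lastSent;
	}
	if (finishedAt > 0) {
		return finishedAt;
	}
	// Downside: logs generated while the Supervisor was down are missed.
	return Date.now() - process.uptime() * 1000;
}
```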
Signed-off-by: Christina Ying Wang <christina@balena.io>
Add `os-power-mode.service`, `nvpmodel.service`, and `os-fan-profile.service`,
which report the status of applying the power mode and fan profile configs
read from config.json. The Supervisor sets these configs in config.json for
these host services to pick up and apply.
Also add host log streaming from `jetson-qspi-manager.service` as it
will very soon be needed for Jetson Orins.
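Following a host unit's journal can be sketched roughly as below, assuming
journalctl is reachable from the Supervisor container:
```typescript
import { spawn } from 'child_process';

// Stream a host service's journal as JSON lines; the Supervisor only
// forwards these logs and does not manage the unit itself.
function followUnitLogs(unit: string) {
	return spawn('journalctl', ['--follow', '--unit', unit, '--output', 'json']);
}

followUnitLogs('os-fan-profile.service').stdout.on('data', console.log);
```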
Relates-to: #2379
See: balena-io/open-balena-api#1792
See: balena-os/balena-jetson-orin#513
Change-type: minor
Signed-off-by: Christina Ying Wang <christina@balena.io>
This adds update-lock support to hostname changes via the host-config
endpoint, in addition to the existing support for proxy changes, as
changing the hostname may cause the OS to restart the engine.
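A sketch of the guarded flow; the lock helper here is a stand-in, not the
Supervisor's actual locking API:
```typescript
// Stand-in update-lock helper, for illustration only.
async function withUpdateLocks<T>(fn: () => Promise<T>): Promise<T> {
	// acquire per-service lockfiles here and release them when done
	return fn();
}

// Hold update locks while applying a hostname, since the OS may restart the
// engine (and therefore user containers) when the hostname changes.
async function applyHostname(hostname: string): Promise<void> {
	await withUpdateLocks(async () => {
		// write the new hostname via the host's configuration mechanism
		console.log(`applying hostname: ${hostname}`);
	});
}
```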
Change-type: minor
Locks could remain from a previous Supervisor run that didn't get to
settle the state. This ensures that any remaining locks are cleaned up
every time the state is settled.
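The cleanup could look roughly like this; the lockfile location and naming
are assumptions:
```typescript
import { promises as fs } from 'fs';
import * as path from 'path';

// Assumed lockfile directory, for illustration only.
const LOCK_DIR = '/tmp/balena';

// Remove lockfiles left behind by a previous Supervisor run so they cannot
// block future updates; invoked every time the state settles.
async function cleanupStaleLocks(): Promise<void> {
	const entries = await fs.readdir(LOCK_DIR).catch(() => [] as string[]);
	for (const entry of entries.filter((e) => e.endsWith('updates.lock'))) {
		await fs.unlink(path.join(LOCK_DIR, entry)).catch(() => undefined);
	}
}
```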
Change-type: patch
We only allow DNS requests through the `balena0` interface, but this
is the default Docker bridge, which is only used by containers that
don't have a custom bridge. However, the Supervisor creates a
custom bridge for all containers unless another network mode is
specified.
Change-type: patch
Signed-off-by: Christina Ying Wang <christina@balena.io>
Resolve an issue in balenaMachine instances installed at <v14.1.0,
in which a Supervisor app with a random UUID is kept in the target db because
its appId is unchanged, even after the balenaMachine instance has been
upgraded to v14.1.0, which patches in the correct reserved Supervisor app
UUIDs. This results in two Supervisors running on devices under the
balenaMachine instance, persisting even after the upgrade.
See: https://balena.fibery.io/search/T7ozi#Inputs/Pattern/Two-supervisors-are-running-on-device-3370
Change-type: patch
Signed-off-by: Christina Ying Wang <christina@balena.io>