Add OCI container extractor #81
Conversation
```nix
in
mkIf (specNix != null && specNix.path != []) (
  let
    cfg = attrByPath specNix.path {} config;
```
Match the NixOS services by the option path
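To illustrate the lookup above: `lib.attrByPath` walks an attribute path and falls back to the given default when the path is absent, so a service that is not configured simply yields an empty config. A minimal sketch (the `[ "services" "forgejo" ]` path is an illustrative stand-in for `specNix.path`):

```nix
let
  lib = import <nixpkgs/lib>;
  # Hypothetical NixOS configuration for the example.
  config = { services.forgejo.enable = true; };
in
  # Resolves the option path against the config; absent paths return {}.
  lib.attrByPath [ "services" "forgejo" ] {} config
# evaluates to { enable = true; }
```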
```nix
# The config should only be applied on the toplevel topology module,
# not for each nixos node.
config = {};
config.topology.serviceRegistry = serviceDefs;
```
Not sure about this line, but it was the only way I could make it work (I needed to expose the service registry to the NixOS configurations).
```nix
};
in
f {
  options.serviceRegistry = mkOption {
```
Expose the service registry as an option (for overriding and customizations)
```nix
map (svc: {inherit node svc;}) visible
) (attrValues config.nodes);

# Deduplicate services by serviceId in overview
```
In the (presumably rare) case where the same service is deployed both as a container and as a NixOS service, display it only once in the overview.
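One way to express that deduplication is a fold that keeps only the first service seen per `serviceId`. This is an illustrative sketch, not the PR's actual implementation (the `dedupBy` helper and the `via` field are invented for the example):

```nix
let
  # Keep only the first element seen for each key; later duplicates are dropped.
  dedupBy = key: xs:
    (builtins.foldl' (acc: x:
      if builtins.hasAttr (key x) acc.seen
      then acc
      else {
        seen = acc.seen // { ${key x} = true; };
        out = acc.out ++ [ x ];
      })
    { seen = {}; out = []; } xs).out;
in
  dedupBy (s: s.serviceId) [
    { serviceId = "forgejo"; via = "nixos"; }
    { serviceId = "forgejo"; via = "oci"; }  # duplicate serviceId, dropped
  ]
# evaluates to [ { serviceId = "forgejo"; via = "nixos"; } ]
```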
```nix
repoLower = strings.toLower repo;
in
canon == repoLower || strings.hasSuffix ("/" + repoLower) canon;
matchesRepos = img: repos: builtins.any (suffixMatch img) repos;
```
Match the containerized services by the image reference
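The matching semantics in the diff above boil down to: exact repository match, or the known repo as a path suffix (which tolerates a registry-host prefix like `docker.io/`). A standalone sketch of that behavior, assuming `canon` is already the image's lowercased repository part with the tag/digest stripped (the real `suffixMatch` in the diff takes the raw image string instead):

```nix
let
  lib = import <nixpkgs/lib>;
  suffixMatch = canon: repo:
    let repoLower = lib.strings.toLower repo;
    in canon == repoLower || lib.strings.hasSuffix ("/" + repoLower) canon;
  matchesRepos = canon: repos: builtins.any (suffixMatch canon) repos;
in {
  exact  = matchesRepos "jellyfin/jellyfin" [ "jellyfin/jellyfin" ];                  # true
  prefix = matchesRepos "docker.io/linuxserver/jellyfin" [ "linuxserver/jellyfin" ];  # true
  name   = matchesRepos "koillection/koillection" [ "koillection" ];                  # true
  miss   = matchesRepos "shlinkio/shlink" [ "jellyfin/jellyfin" ];                    # false
}
```

The `name` case shows why a bare `"koillection"` entry in the registry is enough to match `koillection/koillection`, as discussed later in this thread.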
When I run this branch on my infrastructure it fails:
```
trace: oci-container extractor: UNMATCHED containers → shlink (shlinkio/shlink@sha256:1a697baca56ab8821783e0ce53eb4fb22e51bb66749ec50581adc0cb6d031d7a), slink (anirdev/slink@sha256:98b9442696f0a8cbc92f0447f54fa4bad227af5dcfd6680545fedab2ed28ddd9)
trace: oci-container extractor: UNMATCHED containers → koillection (koillection/koillection@sha256:bb8ad2b6891441d8ec5c3169b684b71574f3bb3e9afb345bad2f91d833d60340)
```
and then later
```
error: attribute 'BaseUrl' missing
at /nix/store/0djp82053nsd8pfmiwywmdhr9nbj48z8-source/nixos/builtin-service-defs.nix:999:51:
   998|   enabled = cfg: cfg.enable or false;
   999|   infoFn = cfg: mkIf (cfg.settings ? BaseUrl) cfg.settings.BaseUrl;
      |                                                   ^
  1000|   detailsFn =
```
Granted, I have not looked into how to configure this; I simply ran it on my infra as-is, so I might be missing something here.
Also tried adding to the service registry, e.g. for koillection: `topology.serviceRegistry.koillection.oci.repos = [ "koillection" ];`. While that makes the UNMATCHED trace disappear, it does not fix the BaseUrl error above.
The BaseUrl error should be fixed now.
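The thread does not show the actual fix, but the failure mode suggests a defensive pattern along these lines: guard every attribute access before using it, so an absent `settings.BaseUrl` degrades gracefully instead of aborting evaluation. A sketch only, not the code merged in the PR:

```nix
# Hypothetical defensive info function: return null when the option
# is absent instead of assuming cfg.settings.BaseUrl exists.
infoFn = cfg:
  if (cfg ? settings) && (cfg.settings ? BaseUrl)
  then cfg.settings.BaseUrl
  else null;
```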
> Also tried adding to the service registry, e.g. for koillection:
> `topology.serviceRegistry.koillection.oci.repos = [ "koillection" ];`
Yes, this is the intended way to match custom services.
```nix
builtins.trace
  ("oci-container extractor: UNMATCHED containers → "
    + (concatStringsSep ", "
      (map (c: "${c._name} (${c.image})") unmatched)))
```
Output the unmatched containers
Force-pushed 75f8e34 to 066dadc
Swarsel left a comment
These are basically comments on what I encountered when quickly running this on my config, without looking into the specific configuration options provided, so I might be doing something wrong.
Currently this fails to build using `nix build <flake>#docs` because flake-utils is not a flake input. I would simply omit using it here.
Thanks, omitted it; it was a leftover from before flake-utils was removed from the project.
```nix
  };
};

config.topology.self = mkIf config.topology.extractors.oci-container.enable {
```
Currently, this renders as such (this is the added example):

Might be subjective, but I think it would be cool if we would render the containers as separate nodes (as in #126)
Could you clarify your reasoning for this? In my mind most containers provide a single service, so they are rendered the same way as NixOS services, inside the parent host.
```nix
  oci = {
    repos = [ "forgejo/forgejo" ];
  };
};
```
Now my config fails to build with `error: A definition for option topology.nodes.summers-forgejo.services.forgejo.details.name is not of type submodule. [...]`.
At this point it seems likely that other services also have problems. I am not sure how you generated these service definitions, but the approach might need another look.
Thanks for your review and recommendations! I adapted the info/details functions from the existing list of services, but obviously some of the options are out of date, and I need to be much more defensive in defining these functions so things don't blow up. I'll go through them, update the option names, and add checks for nulls and non-existent names.
The current services on main should be working for the most part (at least I can vouch for forgejo)
Ok, I added a test flake config to check the successful evaluation of each service definition. forgejo/gitea were indeed adapted a bit inaccurately, but now everything should work (I hope).
Sorry, this kind of flew under my radar; I will try to check this out again tomorrow or so.
I deploy services as OCI containers on NixOS, so I wanted to create an extractor for `virtualisation.oci-containers.containers`. I had to do a little refactoring and extract the service registry so that the service extractors could share the service names and icons.

It works by matching the `virtualisation.oci-containers.containers.<name>.image` option with a list of known service container repository references (mostly official ones), e.g. `jellyfin/jellyfin` and `linuxserver/jellyfin` for Jellyfin.

Users can define custom functions to extract info and details from their container configurations, as well as override and update the service registry.
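Pulling together the description and the review thread, a user-side registry override might look like the sketch below. Only `oci.repos` is confirmed by the thread; the commented-out extractor hook is a hypothetical illustration of the "custom functions to extract info and details" mentioned above:

```nix
{
  # Match a custom container image against the "koillection" service entry
  # (taken from the review thread).
  topology.serviceRegistry.koillection = {
    oci = {
      repos = [ "koillection/koillection" ];
      # Hypothetical hook: pull display info out of the container definition.
      # infoFn = container: container.environment.APP_URL or null;
    };
  };
}
```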