Add OCI container extractor #81

Open

johnspade wants to merge 6 commits into oddlama:main from johnspade:feat/oci-extractor

Conversation

@johnspade

I deploy services as OCI containers on NixOS, so I wanted to create an extractor for virtualisation.oci-containers.containers. I had to do a little refactoring and extract the service registry so that the service extractors could share the service names and icons.

It works by matching the virtualisation.oci-containers.containers.<name>.image option with a list of known service container repository references (mostly official ones), e.g. jellyfin/jellyfin and linuxserver/jellyfin for Jellyfin.

Users can define custom functions to extract info and details from their container configurations, as well as override and update the service registry.
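For example, extending the registry for a custom container might look roughly like this (a sketch based on the option path used later in this discussion; the exact attribute shape inside each registry entry may differ from the PR):

```nix
{
  # Match any container whose image repository ends in "koillection".
  # The `oci.repos` path is taken from this PR's discussion; other
  # fields of a registry entry are not shown here.
  topology.serviceRegistry.koillection = {
    oci.repos = [ "koillection" ];
  };
}
```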

Comment thread nixos/extractors/services.nix Outdated
in
mkIf (specNix != null && specNix.path != []) (
let
cfg = attrByPath specNix.path {} config;
Author:

Match the NixOS services by the option path

Comment thread nixos/module.nix Outdated
# The config should only be applied on the toplevel topology module,
# not for each nixos node.
config = {};
config.topology.serviceRegistry = serviceDefs;
Author:

Not sure about this line, but it was the only way I could make it work (I needed to expose the service registry to the NixOS configurations)

Comment thread options/services-registry.nix Outdated
};
in
f {
options.serviceRegistry = mkOption {
Author:

Expose the service registry as an option (for overriding and customizations)

Comment thread topology/renderers/svg/default.nix Outdated
map (svc: {inherit node svc;}) visible
) (attrValues config.nodes);

# Deduplicate services by serviceId in overview
Author:

In the (presumably rare) case where the same service is deployed both as a container and as a NixOS service, display it only once in the overview

repoLower = strings.toLower repo;
in
canon == repoLower || strings.hasSuffix ("/" + repoLower) canon;
matchesRepos = img: repos: builtins.any (suffixMatch img) repos;
Author:

Match the containerized services by the image reference
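A self-contained sketch of that predicate (the `canonicalize` helper here is hypothetical; the PR's real normalization of image references may differ):

```nix
let
  lib = import <nixpkgs/lib>;
  inherit (lib) strings;

  # Hypothetical normalization: lowercase the image reference and strip
  # everything from the first ":" (tag) or "@" (digest) onward.
  canonicalize = img: strings.toLower (builtins.head (builtins.split "[:@]" img));

  # A repo matches if it equals the canonical reference, or if the
  # canonical reference ends in "/<repo>" (covers registry prefixes).
  suffixMatch = img: repo:
    let
      canon = canonicalize img;
      repoLower = strings.toLower repo;
    in
    canon == repoLower || strings.hasSuffix ("/" + repoLower) canon;

  matchesRepos = img: repos: builtins.any (suffixMatch img) repos;
in
# "docker.io/jellyfin/jellyfin:latest" canonicalizes to
# "docker.io/jellyfin/jellyfin", which ends in "/jellyfin/jellyfin".
matchesRepos "docker.io/jellyfin/jellyfin:latest" [ "jellyfin/jellyfin" ]
```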

Contributor:

When I run this branch on my infrastructure it fails:

trace: oci-container extractor: UNMATCHED containers → shlink (shlinkio/shlink@sha256:1a697baca56ab8821783e0ce53eb4fb22e51bb66749ec50581adc0cb6d031d7a), slink (anirdev/slink@sha256:98b9442696f0a8cbc92f0447f54fa4bad227af5dcfd6680545fedab2ed28ddd9)
trace: oci-container extractor: UNMATCHED containers → koillection (koillection/koillection@sha256:bb8ad2b6891441d8ec5c3169b684b71574f3bb3e9afb345bad2f91d833d60340)

and then later

error: attribute 'BaseUrl' missing
at /nix/store/0djp82053nsd8pfmiwywmdhr9nbj48z8-source/nixos/builtin-service-defs.nix:999:51:
  998|       enabled = cfg: cfg.enable or false;
  999|       infoFn = cfg: mkIf (cfg.settings ? BaseUrl) cfg.settings.BaseUrl;
     |                                                   ^
 1000|       detailsFn =

Granted, I have not looked into how to configure this; I simply ran it on my infra as is, so I might be missing something here.

Also tried adding to the service registry, e.g. for koillection: topology.serviceRegistry.koillection.oci.repos = [ "koillection" ];. While that makes the UNMATCHED trace disappear, it does not fix the BaseUrl error above.

Author:

The BaseUrl error should be fixed now.

Also tried adding to the service registry, e.g. for koillection: topology.serviceRegistry.koillection.oci.repos = [ "koillection" ];

Yes, this is the intended way to match custom services.

Comment thread nixos/extractors/oci-container.nix Outdated
builtins.trace
("oci-container extractor: UNMATCHED containers → "
+ (concatStringsSep ", "
(map (c: "${c._name} (${c.image})") unmatched)))
Author:

Output the unmatched containers

johnspade marked this pull request as ready for review on June 18, 2025, 13:48
Contributor @Swarsel left a comment:

These are basically comments on what I encountered when quickly running this on my config, without looking at all into the specific configurations provided, so I might be doing something wrong.

Comment thread examples/oci-containers/flake.nix Outdated

Contributor:

Currently this fails to build using nix build <flake>#docs because flake-utils is not a flake input. I would simply omit using it here

Author:

Thanks, omitted it; it was a leftover from before flake-utils was removed from the project.

};
};

config.topology.self = mkIf config.topology.extractors.oci-container.enable {
Contributor:

Currently, this renders as such (this is the added example):
[image: screenshot of the rendered example]

Might be subjective, but I think it would be cool if we would render the containers as separate nodes (as in #126)

Author:

Could you clarify your reasoning for this? In my mind, most containers will provide a single service, so they are rendered the same way as NixOS services: inside the parent host.

oci = {
repos = [ "forgejo/forgejo" ];
};
};
Contributor:

Now my config fails to build with error: A definition for option topology.nodes.summers-forgejo.services.forgejo.details.name is not of type submodule. [...].

At this point it seems likely that other services also have problems. I am not sure how you generated these service definitions, but the approach might need another look.

Author:

Thanks for your review and recommendations! I adapted info/details functions from the existing list of services, but obviously some of the options are out of date and I need to be much more defensive in defining these functions so things don't blow up. I'll go through them, update the option names and add checks for nulls/non-existent names.
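The defensive pattern could look roughly like this (a sketch, not the PR's actual code; `infoFn` and `lib.mkIf` as they appear in the error trace above):

```nix
# Sketch of a defensive infoFn: never select an attribute that may be
# missing; fall back to null and only emit info when the value exists.
infoFn = cfg:
  let
    # `or` on attribute selection avoids "attribute missing" errors when
    # the upstream module renamed or dropped the option.
    baseUrl = (cfg.settings or { }).BaseUrl or null;
  in
  lib.mkIf (baseUrl != null) baseUrl;
```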

Contributor:

The current services on main should be working for the most part (at least I can vouch for forgejo)

Author:

Ok, I added a test flake config to check the successful evaluation of each service definition. forgejo/gitea were indeed adapted a bit inaccurately, but now everything should work (I hope).
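Such an evaluation check could be sketched like this (the file path is taken from the error trace above; everything else is hypothetical and may not match the PR's actual test flake):

```nix
# Hypothetical eval smoke test: apply each service definition's
# info/details functions to a stub config and deeply force the result,
# so missing-attribute errors surface at evaluation time instead of on
# a user's infrastructure.
let
  lib = import <nixpkgs/lib>;
  serviceDefs = import ./nixos/builtin-service-defs.nix { inherit lib; };
  check = name: def:
    builtins.deepSeq [
      ((def.infoFn or (_: null)) { })
      ((def.detailsFn or (_: null)) { })
    ]
    name;
in
lib.mapAttrsToList check serviceDefs
```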

Contributor:

Sorry this kinda flew under my radar; I will try to check this out again tomorrow or so
