Sovereign Cloud Stack

One platform — standardized, built and operated by many.

Revisited: Air-gapped installation

Christian Berendt July 19, 2023

A few months ago we wrote a concept for implementing air-gapped installations of the infrastructure layer of the Sovereign Cloud Stack (SCS). Sometimes the opportunity arises to completely re-evaluate and re-analyze a concept with some distance. If this opportunity does not arise by itself, it should be explicitly planned and created.

One of the pillars of SCS is to pick up and use existing open-source building blocks. For one thing, this avoids stretching the limited project budget unnecessarily. Even more importantly, it strengthens the overall open-source ecosystem and helps create a sustainable solution with broad acceptance. The available and established solutions are not always a perfect match, and often you have to adapt to the realities. But even then, complex in-house developments should be avoided whenever possible. If existing technology does not fit perfectly, you should first try to adapt the problem so that the existing technology can be used anyway. If that does not work, it should be examined whether the existing technology can be extended to make it suitable.

Back to the air-gapped installation. We had created a concept that provided a variety of individual services, some of them in-house developments, to mirror the individual sources used within the infrastructure layer, such as APT packages. After re-evaluating this approach, we had to note that something difficult to maintain had emerged. We asked ourselves who would finish these services and who would ensure that they remain in good condition and usable for the next few years. In the end, we came to the conclusion that it was not worth the time to continue with the previous concept.

So back to square one? Not completely. The original idea of splitting the problem in two, supplying the nodes of the control and data planes on the one hand, and supplying the management plane and securing our build processes on the other, was not discarded. The approach of using Squid as the middle layer between the management plane and the control and data planes remains and is fully implemented. This ensures that all internal nodes are supplied exclusively via the management plane.
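To illustrate the idea, here is a minimal, purely illustrative squid.conf sketch (not the actual SCS configuration; the network range is a placeholder): internal nodes may only fetch content through the proxy running on the management plane, and everything else is denied.

```
# Illustrative squid.conf sketch, not the SCS configuration.
# Only nodes of the control and data planes (placeholder range)
# may use the proxy on the management plane.
acl internal_nodes src 192.168.16.0/20

http_access allow internal_nodes
http_access deny all

# Proxy port the internal nodes are configured to use.
http_port 3128
```

With a setup like this, the internal nodes have no direct route to the outside world; every download is forced through the management plane, which is the single point that has to be supplied in an air-gapped scenario.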

On the management plane, all services planned in the original concept were discarded. Their place is now taken by Pulp, a software repository management platform that can be extended with plugins. Pulp is well established, practically all plugins we need are already available, and everything is controllable via an API.
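As a hedged sketch of what "controllable via API" looks like in practice: the pulp_deb plugin lets you register an APT remote by POSTing a JSON body to Pulp's REST API. The helper below only builds such a payload; the API base URL, remote name, and mirror URL are illustrative placeholders, not the OSISM configuration.

```python
import json

# Placeholder Pulp API base URL (not a real OSISM endpoint).
# The pulp_deb remote endpoint is POST {PULP_API}/remotes/deb/apt/
PULP_API = "https://pulp.example.com/pulp/api/v3"


def apt_remote_payload(name: str, url: str, distributions: str) -> dict:
    """Build the JSON body for creating an APT remote with pulp_deb.

    'distributions' is a whitespace-separated list of suites,
    e.g. "jammy". 'policy: immediate' tells Pulp to download all
    packages at sync time, which is what an air-gapped mirror needs.
    """
    return {
        "name": name,
        "url": url,
        "distributions": distributions,
        "policy": "immediate",
    }


# Example payload for mirroring Ubuntu Jammy (placeholder values).
payload = apt_remote_payload(
    "ubuntu-jammy", "http://archive.ubuntu.com/ubuntu/", "jammy"
)
print(json.dumps(payload, indent=2))
```

In a real deployment this payload would be sent to the Pulp API (or the equivalent `pulp` CLI command would be used), followed by creating a repository, syncing it against the remote, and publishing it, all driven through the same API.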

As a first step, an MVP of a Pulp service has now been built on the OSISM project side and broadly populated with all required Ubuntu packages as well as Ansible collections and roles. Currently, the CI is being switched to Pulp as the primary source. Packages from PyPI, which are only required for the build of the OSISM container images, will be added in this process.
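For consumers, switching to Pulp as the primary source essentially means pointing APT and pip at the content endpoints Pulp serves. The snippets below are a hypothetical sketch: the hostname and base paths are placeholders, not the OSISM Pulp service.

```
# /etc/apt/sources.list.d/pulp.list (placeholder host and base path)
deb http://pulp.example.com/pulp/content/ubuntu/ jammy main

# /etc/pip.conf (placeholder host; pulp_python serves a PEP 503
# "simple" index below its base path)
[global]
index-url = http://pulp.example.com/pypi/simple/
```

Once clients are configured like this, neither APT nor pip ever talks to the public mirrors, only to the local Pulp instance.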

Afterwards, a role, osism.services.pulp, will be provided with which the Pulp service can be deployed on the management plane. The synchronization with the Pulp service of the OSISM project will then be integrated into the OSISM CLI.
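Since the role does not exist yet, the following is only a guess at what using it could look like, a minimal Ansible playbook applying the planned role to the management node (the host group name is a placeholder):

```yaml
# Hypothetical playbook sketch: deploy Pulp on the management plane
# with the planned osism.services.pulp role. "manager" is a
# placeholder host group, not a confirmed inventory name.
- hosts: manager
  roles:
    - osism.services.pulp
```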

Completion: In time for the next major release of SCS.

About the author

Christian Berendt
Founder & CEO of OSISM @ OSISM GmbH
Christian has been involved with open-source software for many years, specifically OpenStack and open infrastructure in general. He has been involved in the SCS project since the very beginning. When he's not working or spending time with his family, he's usually out running in the backyard or playing with microcontrollers.