Clark Boylan 043f066380 Use podman to build non-Docker Hub container images
We have two sets of image build jobs. The first is targeted
specifically at docker and Docker Hub. The second set uses the generic
container image roles and jobs from zuul/zuul-jobs. In this second set
we have the choice of using either podman or docker. Choose podman,
because podman plays nicer with mirroring images hosted outside of
Docker Hub. This is important for image builds like Gerrit, where we
build a base image and a Gerrit-version-specific image, and we need to
look up the base image from a mirror of versions hosted outside of
Docker Hub.
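
For context, the practical difference is that docker's daemon-level
registry mirror setting only applies to Docker Hub (docker.io), while
podman consults containers-registries.conf, which can define a mirror
for any registry. A minimal sketch of such an entry, with a
hypothetical mirror host name:

  [[registry]]
  location = "quay.io"

  [[registry.mirror]]
  # hypothetical mirror serving quay.io content
  location = "mirror.example.org/quay.io"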

The main drawback to keep in mind here is that podman handles multi-arch
container image builds differently from docker. This means that if/when
we get to porting the multi-arch python base image builds to quay and
podman, we may need to add additional support for multi-arch. Currently
only nodepool-builder relies on this, and it is being replaced by
zuul-launcher, so we may just sidestep the issue entirely.
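
To make that difference concrete, here is a rough sketch with purely
illustrative image names: docker usually drives multi-arch builds
through buildx in a single step, while podman builds one image per
architecture into a manifest list that is then pushed separately:

  # docker: buildx builds all architectures and pushes a manifest list
  docker buildx build --platform linux/amd64,linux/arm64 \
      -t quay.io/example/image:latest --push .

  # podman: build per-arch images into a manifest list, then push it
  podman build --platform linux/amd64,linux/arm64 \
      --manifest quay.io/example/image:latest .
  podman manifest push quay.io/example/image:latest \
      docker://quay.io/example/image:latest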

We update the hound Dockerfile to force a rebuild of this image,
because it is the only image currently affected by the change to build
with podman. This ensures we don't discover problems with podman
building hound images later, when we have some other reason to rebuild
that image.

Finally, while we are at it, drop container_command from the mirror
container images job, because that job now uses skopeo and doesn't rely
on podman or docker. This should reduce overall confusion when trying to
understand the behavior of our jobs.
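
For reference, skopeo copies images directly between registries without
needing a container engine at all; a mirror job of this sort boils down
to something like the following (image names are illustrative):

  skopeo copy docker://quay.io/example/image:latest \
      docker://docker.io/example/image:latest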

Change-Id: Ie7a309452e33e0996702c849167b7881d79db5fb
2025-04-23 10:07:03 -07:00

OpenDev System Configuration

This is the machinery that drives the configuration, testing, continuous integration and deployment of services provided by the OpenDev project.

Services are driven by Ansible playbooks and associated roles stored here. If you are interested in the configuration of a particular service, starting at playbooks/service-<name>.yaml will show you how it is configured.
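
As an illustration only, not a literal playbook from this repository, a
service playbook typically maps a host group to the roles that
configure that service:

  - hosts: hound:!disabled
    name: "Deploy hound"
    roles:
      - iptables
      - install-docker
      - hound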

Most services are deployed via containers; many of them are built or customised in this repository; see docker/.

A small number of legacy services are still configured with Puppet. Although the act of running puppet on these hosts is managed by Ansible, the actual core of their orchestration lives in manifests and modules.

The files in this repository are provided as an opinionated example service deployment, and to allow the OpenDev Collaboratory to use public software development workflows in order to coordinate changes and improvements to the systems it runs. This repository is not intended as a reconsumable project on its own, and anyone wishing to adjust it to suit their own needs should do so with a fork. The system-config reviewers are unable to evaluate and support use cases for the contents here other than their own.

Testing

OpenDev infrastructure runs a complete testing and continuous-integration environment, powered by Zuul.

Any changes to playbooks, roles or containers will trigger jobs to thoroughly test those changes.

Tests run the orchestration for the modified services on test nodes assigned to the job. After the test deployment is configured (validating that the basic environment at least starts running), specific tests in the testinfra directory validate functionality.
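
For a flavor of what those testinfra checks look like, here is a
minimal hypothetical example that asserts a deployed service is
listening; the service and port are assumptions for illustration:

  def test_service_listening(host):
      # testinfra injects the `host` fixture for each node under test
      assert host.socket("tcp://0.0.0.0:8080").is_listening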

Continuous Deployment

Once changes are reviewed and committed, they will be applied automatically to the production hosts. This is done by Zuul jobs running in the deploy pipeline. At any one time, you may see these jobs running live on the status page or you could check historical runs on the pipeline results (note there is also an opendev-prod-hourly pipeline, which ensures things like upstream package updates or certificate renewals are incorporated in a timely fashion).

Contributing

Contributions are welcome!

You do not need any special permissions to make contributions, even those that will affect production services. Your changes will be automatically tested, reviewed by humans and, once accepted, deployed automatically.

Bug fixes or modifications to existing code are great places to start, and you will see the results of your changes in CI testing. Please remember that this repository consists of configuration and orchestration for OpenDev Collaboratory production systems, so contributions to it will be evaluated on the basis of whether they're useful or applicable to OpenDev's services. Changes intended to make the contents more easily reusable outside OpenDev itself are not in scope, and so will be rejected by reviewers.

You can develop all the playbooks, roles, containers and testing required for a new service just by uploading a change. Using a similar service as a template is generally a good place to start. If deploying to production will require new compute resources (servers, volumes, etc.) these will have to be deployed by an OpenDev administrator before your code is committed. Thus if you know you will need new resources, it is best to coordinate this before review.

The #opendev channel on OFTC IRC is the main place for interactive discussion. Feel free to ask any questions and someone will try to help ASAP. The OpenDev meeting is a coordinated time to synchronize on infrastructure issues. Issues should be added to the agenda for discussion; even if you cannot attend, you can raise your issue and check back on the logs later. There is also the service-discuss mailing list, where you are welcome to send queries or questions.

Documentation

The latest documentation is available at https://docs.opendev.org/opendev/system-config/latest/

That documentation is generated from this repository. You can generate it yourself with tox -e docs.
