
s2i with buildah support #1031

Open
rajivml opened this issue Mar 24, 2020 · 6 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@rajivml

rajivml commented Mar 24, 2020

Hi,

@otaviof I have seen PR #1003, where this feature is being implemented, and just want to check: once this PR is merged, if I switch to buildah as my container manager, will the issue I am facing in #1016 also be resolved? The issue in #1016 is that the s2i containers being spawned are repeatedly getting killed on Kubernetes when the nodes are under memory pressure, and I am not able to figure out how to avoid them getting killed.

I am assuming that with this implementation, i.e. with buildah in place, the entire logic to build a container image will execute within the same pod that triggered the build using s2i, so that the memory and CPU limits applicable to the pod will also apply to the s2i build, since the build will be running inside the pod.

@otaviof
Member

otaviof commented Mar 24, 2020

Hello @rajivml!

> I have seen this PR #1016, where this feature is being implemented, and just want to check: once this PR is merged, if I switch to buildah as my container manager, will the issue I am facing here also be resolved?

Yes, that's correct. Using buildah as your container manager would avoid having to run the Docker-in-Docker setup described in #1016.

> The issue I am facing in #1016 is that the s2i containers being spawned are repeatedly getting killed on Kubernetes when the nodes are under memory pressure, and I am not able to figure out how to avoid them getting killed.

However, you would still need to investigate the root cause of this issue. Using buildah as the container manager would save you resources, but I cannot determine whether that is enough to solve the memory pressure issue.

> I am assuming that with this implementation, i.e. with buildah in place, the entire logic to build a container image will execute within the same pod that triggered the build using s2i, so that the memory and CPU limits applicable to the pod will also apply to the s2i build, since the build will be running inside the pod.

That's true for buildah as much as for DinD. The DinD setup you're using has a sidecar running the Docker instance, and Kubernetes sidecars run in the same pod.
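
To illustrate, here is a rough sketch of such a DinD build pod. This is not the exact manifest from #1016; the image names and commands are placeholders. The point is that the build container and the docker:dind sidecar share one pod, so the pod's resource accounting covers the containers that the Docker daemon spawns:

```yaml
# Hypothetical DinD build pod: an s2i build container plus a docker:dind
# sidecar in the same pod. Image names and the builder image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: s2i-dind-build
spec:
  restartPolicy: Never
  containers:
    - name: build
      image: registry.example.com/my-s2i-runner:latest   # hypothetical image containing the s2i CLI
      env:
        - name: DOCKER_HOST
          value: tcp://localhost:2375                     # talk to the sidecar's Docker daemon
      command: ["s2i", "build",
                "https://github.com/example/app.git",     # placeholder source repository
                "registry.access.redhat.com/ubi8/python-38",
                "example-app:latest"]
    - name: dind
      image: docker:dind
      securityContext:
        privileged: true                                  # DinD requires a privileged sidecar
      env:
        - name: DOCKER_TLS_CERTDIR
          value: ""                                       # plain TCP on localhost:2375, no TLS
```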

Using buildah as the container manager would avoid having to run a sidecar altogether and makes things considerably simpler.
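
For comparison, a rough sketch of the general idea of an in-pod buildah build. This is not the design from #1003; the image, flags, and the way the source reaches the pod are assumptions:

```yaml
# Hypothetical single-container build pod using buildah instead of a DinD sidecar.
# How the application source ends up in /src (init container, volume, etc.) is out
# of scope here; the point is that the whole build runs inside this one container,
# so the pod's CPU/memory limits bound the entire build.
apiVersion: v1
kind: Pod
metadata:
  name: buildah-build
spec:
  restartPolicy: Never
  volumes:
    - name: src
      emptyDir: {}
  containers:
    - name: build
      image: quay.io/buildah/stable
      securityContext:
        privileged: true            # may be relaxed depending on storage driver and host setup
      volumeMounts:
        - name: src
          mountPath: /src
      command:
        - buildah
        - --storage-driver=vfs      # vfs avoids overlay-on-overlay requirements
        - bud
        - -t
        - example-app:latest
        - /src
```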

@rajivml
Author

rajivml commented Mar 25, 2020

Thanks @otaviof for your immediate reply on this.

> I have seen this PR #1016, where this feature is being implemented, and just want to check: once this PR is merged, if I switch to buildah as my container manager, will the issue I am facing here also be resolved?
>
> Yes, that's correct. Using buildah as your container manager would avoid having to run the Docker-in-Docker setup described in #1016.
>
> The issue I am facing in #1016 is that the s2i containers being spawned are repeatedly getting killed on Kubernetes when the nodes are under memory pressure, and I am not able to figure out how to avoid them getting killed.
>
> However, you would still need to investigate the root cause of this issue. Using buildah as the container manager would save you resources, but I cannot determine whether that is enough to solve the memory pressure issue.

Yeah, true. We are investigating the issue but have not been able to pinpoint it. I think that since we are not setting limits on the pods, the s2i container that gets spawned runs in best-effort mode and is killed as soon as the kernel detects an OOM scenario. We are trying to reduce the number of pods responsible for building Docker images so that we throttle the requests, and we will also try setting limits on the pods.
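
For reference, a minimal sketch of the kind of requests/limits we would set (the values are placeholders). Without any requests or limits the pod lands in the BestEffort QoS class, which is the first to be evicted or OOM-killed under node memory pressure:

```yaml
# Sketch: explicit requests/limits move the build pod out of the BestEffort QoS class.
# If requests equal limits on every container, the pod is classed as Guaranteed and is
# the last candidate for eviction under node memory pressure.
apiVersion: v1
kind: Pod
metadata:
  name: s2i-build
spec:
  containers:
    - name: build
      image: registry.example.com/my-s2i-runner:latest   # hypothetical image
      resources:
        requests:
          cpu: "1"
          memory: 2Gi
        limits:
          cpu: "2"        # requests != limits -> Burstable; set them equal for Guaranteed
          memory: 2Gi
```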

> I am assuming that with this implementation, i.e. with buildah in place, the entire logic to build a container image will execute within the same pod that triggered the build using s2i, so that the memory and CPU limits applicable to the pod will also apply to the s2i build, since the build will be running inside the pod.
>
> That's true for buildah as much as for DinD. The DinD setup you're using has a sidecar running the Docker instance, and Kubernetes sidecars run in the same pod.

Thanks for letting me know how s2i containers run in DinD mode. I was not aware that the spawned s2i container runs within the same pod that triggered the build; I was under the assumption that it would be spawned on the same node where the DinD pod is running and that we would have no control over it. If it is running as a sidecar within the same DinD pod, then I think setting limits on the pod should definitely help.

> Using buildah as the container manager would avoid having to run a sidecar altogether and makes things considerably simpler.

BTW, when are we targeting to merge this PR, any idea? The DinD setup was raised as a security issue as well.

@rajivml
Author

rajivml commented May 31, 2020

Hi @otaviof, just want to check whether buildah support is still on the radar, because I don't see much activity on PR #1003. Are there any timelines you are targeting? If this is a long shot, I will look at other alternatives for builds rather than using DinD.

@otaviof
Member

otaviof commented Jun 9, 2020

@rajivml I'm afraid buildah support will take some more thought and effort before we can introduce it. Sorry, it might take longer than expected.

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label (denotes an issue or PR has remained open with no activity and has become stale) on Oct 20, 2020
@adambkaplan
Contributor

/lifecycle frozen

This is a feature we want for the future of s2i.

@openshift-ci-robot added the lifecycle/frozen label and removed the lifecycle/stale label on Oct 21, 2020