So I know that with each RUN command Docker creates a layer, or intermediate images if you will, and that this leads to caching of apt-get update. But I am having a very difficult time installing new packages with apt-get in Docker, and I'm not sure what is going on; I have spent hours fiddling around and getting nowhere. I'm mainly using Ubuntu as a base image.

Certain packages seem to randomly not install, for instance wget. If I go into a terminal in my container and do an apt-get update and then an apt-get install wget, I can install wget. But in the Dockerfile, even when I delete all the images created by each RUN line to stop the update from being cached, I cannot install wget with the exact same commands. Other packages also seem to randomly work and not work; I've even had a package, python-glpk, which was installing from the Dockerfile at first, but not anymore.

It takes a long time to do an update on my connection, and I'm on a virtual machine, so my workflow is completely broken. Is anyone else having issues with a simple apt-get update && apt-get install? What are the best practices when dealing with apt-get in Docker to ensure everything works as it should?

Several people tend to customize FROM ubuntu and add helper scripts to trim package installation, but the official ubuntu:18.04 image already comes with a few improvements. Run this:

docker run --rm -it ubuntu:18.04 bash

and take a look around /etc/apt to notice it has a few triggers in place specifically for Docker builds:

-r--r--r-- 1 root root 1081 Feb 4 21:03 01autoremove-kernels
-rw-r--r-- 1 root root 44 Feb 6 03:37 docker-autoremove-suggests
-rw-r--r-- 1 root root 318 Feb 6 03:37 docker-clean
-rw-r--r-- 1 root root 70 Feb 6 03:37 docker-gzip-indexes
-rw-r--r-- 1 root root 27 Feb 6 03:37 docker-no-languages

Apt is pre-configured to clear caches after every apt install, use compressed indices, and avoid suggesting any further packages.

Some output or logs would be nice to answer your question. Sometimes apt-get update simply gets slow, and that has nothing to do with Docker itself; and if you have a bad connection, packages might fail to download completely.

As you said, each line in a Dockerfile creates an image layer, and the cache is saved locally to speed up image creation. There are some practices to minimize Docker image size by using as few lines as possible and combining them, so for package installation the following pattern is used:

RUN apt-get update && \
    apt-get install -y <package> && \
    rm -rf /var/lib/apt/lists/*

This will take some time to update the cache, install the package, and clean up space, and it generates only a single image layer, so for any other package installation you will need to run apt-get update again first. One approach you can try is running each step in a new RUN statement:

RUN apt-get update
RUN apt-get install -y <package>

This way you will have more layers, but you keep an "updated" apt cache layer and install packages one by one.

A separate question: I have an unattended script for installing servers. At the beginning of the script there is a sudo apt-get dist-upgrade --yes. The dist-upgrade has a nasty user-input screen at its end asking to restart services. It breaks my whole script. Is it possible to auto-accept service restarts or disable this screen? Also, I'm afraid it might leave my server stuck at some point while updating. The script already does things like:

sudo apt-get remove apt-listchanges --assume-yes --force-yes &
# using export is important since some of the commands in the script will fire in a subshell
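For the Docker question above, the two RUN patterns from the answer can be written out as a Dockerfile fragment. The package names wget and ca-certificates are stand-ins; the single-RUN form trades layer reuse for a clean apt cache, while the split form keeps a cached "updated" layer:

```dockerfile
FROM ubuntu:18.04

# Pattern 1: update, install, and cleanup in one layer, so stale apt
# lists never leak into the image. Any later install must repeat
# apt-get update, because the lists are deleted here.
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
        wget \
        ca-certificates \
 && rm -rf /var/lib/apt/lists/*

# Pattern 2 (alternative): separate RUN statements. More layers, but the
# "updated" cache layer is reused between builds and packages install
# one by one, which helps isolate a failing download.
# RUN apt-get update
# RUN apt-get install -y wget
# RUN apt-get install -y ca-certificates
```

Note that with pattern 2 a cached "updated" layer can also go stale, which is one reason a cached build may fail to install a package that installs fine interactively; rebuilding with --no-cache forces a fresh apt-get update.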
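The service-restart screen comes from debconf prompting during the upgrade, so the usual cure is to run apt noninteractively. A minimal sketch follows; the exact prompt handling varies by release (newer systems prompt via needrestart instead), so treat the flags as a starting point rather than a verified recipe. The dry-run guard and RUN_UPGRADE variable are my additions for illustration:

```shell
#!/bin/sh
# Run apt noninteractively so debconf never opens the "restart services"
# dialog that breaks unattended scripts.
# Export (not merely assign) the variable: some commands in the script
# fire in a subshell, and only exported variables reach child processes.
export DEBIAN_FRONTEND=noninteractive

# --yes answers apt's own prompts; the Dpkg options keep existing config
# files instead of stopping to ask about them.
UPGRADE_CMD="apt-get dist-upgrade --yes -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confold"

echo "would run: $UPGRADE_CMD"

# Guarded so the script can be dry-run; set RUN_UPGRADE=1 to really upgrade.
if [ "${RUN_UPGRADE:-0}" = "1" ]; then
    sudo sh -c "$UPGRADE_CMD"
fi
```

Running the script with RUN_UPGRADE unset only prints the command, which makes it safe to test before pointing it at a real server.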