Last Updated: 2021-05-16
Warning: These are WIP notes. Don't take this document in its current form too seriously.
I started working on an existing PHP project which was fully dockerized locally. Unfortunately it was a black box to me, so this was my process of figuring out how Docker works (generally and in this project).
There are often two Docker related files in the root directory:
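Typically these are a Dockerfile (instructions for building an image) and a docker-compose.yml (how to run one or more containers from images). A minimal hypothetical Dockerfile, just to show the shape (the file names here are made up, not from any real project):

```dockerfile
# Start from an existing base image
FROM alpine:3.13

# Bake a file from the build context into the image (hypothetical file)
COPY hello.txt /app/hello.txt

# Default command run when a container starts from this image
CMD ["cat", "/app/hello.txt"]
```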
$ docker ps. But TBH when debugging, you probably want
$ docker ps -a, since this also lists non-running containers.
$ docker container ls. This also shows the container IDs.
$ docker stop b5b82dc2f650 - the last bit is a container ID. You can just type the first few chars and it will do the rest (like git and SHA1s).
$ docker system prune -a
$ docker container run IMAGE_NAME CMD, e.g.
$ docker container run alpine ls -l
$ docker exec my_container ping google.com - this pings google.com within my_container (which is already running)
$ docker exec -it app bash, where app is the container name. The
-it bit is for "interactive" and "attach TTY".
$ docker exec -u root -it 750af8a bash (specify the user with -u)
If you run multiple docker container run alpine COMMAND commands (i.e. multiple commands on the same image), they will not affect one another. Each has its own filesystem etc.
$ docker run, you can actually get access to that modified system state by finding the container ID with
$ docker container ls -a, then running
$ docker container start CONTAINER_ID, and then running
docker container exec CONTAINER_ID xyz
$ docker image pull NAME, e.g.
$ docker image pull alpine
Dockerfile. E.g. you could write a bunch of commands there, then run (in the same folder as this Dockerfile):
$ docker image build -t DESIRED_IMAGE_NAME ., e.g.
$ docker image build -t hello:v0.1 . (for name hello and tag v0.1)
$ docker image history IMAGE_NAME. This shows the layers an image is built from; Docker caches these layers so rebuilds can reuse them.
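Layer caching is why Dockerfile commands are usually ordered from least- to most-frequently changing. A hypothetical PHP-flavored sketch (the base image and file names are assumptions, not from this project):

```dockerfile
# Base image changes rarely, so it sits first
FROM php:7.4-apache

# Dependency manifests change less often than application code,
# so copying them early lets later builds reuse this cached layer
COPY composer.json composer.lock /var/www/html/

# Application code changes frequently, so it goes last;
# editing it only invalidates the layers from here down
COPY . /var/www/html/
```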
$ docker image ls
Let's say you did a lot of manual tweaking inside a docker container. From
outside this container you can run
docker commit your_container_id and then
your system will be persisted exactly as is. This is not considered a best
practice, but can be very handy for quick and dirty deploys (e.g. prototypes).
Later you can boot this up with
docker run your_new_image_id.
Let's say you want to run a shell:
$ docker container run alpine /bin/sh
This version just exits immediately. Instead you have to add interactivity and attach a TTY:
$ docker container run -it alpine /bin/sh
You will now be in the shell of the container, as you can confirm with
Containers that have finished executing are hidden when you type
docker container ls, but they show up if you run
docker container ls -a
$ docker container run, the results can be seen by looking at the output of
docker container inspect CONTAINER_ID. See the
UpperDir entry, specifically. Now run
ls on that directory on your host system, and the data will be present. However, if you run
docker container rm CONTAINER_ID, that folder will be removed.
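A sketch of pulling out just that entry (CONTAINER_ID is a placeholder; the .GraphDriver fields assume the overlay2 storage driver):

```shell
# Print only the writable layer's location on the host
docker container inspect --format '{{ .GraphDriver.Data.UpperDir }}' CONTAINER_ID

# List the files the container has written (run this on the host)
ls "$(docker container inspect --format '{{ .GraphDriver.Data.UpperDir }}' CONTAINER_ID)"
```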
VOLUME ["/mydata"]. This time the data will be at the folder listed in the output of
$ docker container inspect.
If the Dockerfile declares a volume at
/mydata, then a folder called /mydata is created in the container. Only data stored there is persisted.
E.g. to mount the host's /tmp folder at /data in the container:
$ docker container run -ti -v /tmp:/data alpine sh
A named volume can be created up front (docker volume create --name html) or with docker compose. This named volume can then be assigned to a particular folder in the container:
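For example, a hypothetical compose file assigning the named volume html to nginx's default web root:

```yaml
services:
  web:
    image: nginx:latest
    volumes:
      # named volume "html" mounted at nginx's default web root
      - html:/usr/share/nginx/html

# declaring a volume here makes compose create it if it doesn't exist
volumes:
  html:
```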
If you have a wordpress_cli container that expects wordpress files to be present, you'll need to share a volume from the wordpress container to the wordpress_cli container.
docker run -it --rm -v $(pwd):/usr/src/project tensorflow/tensorflow bash
Now when you visit
/usr/src/project it will have everything you need from the directory you ran the command in.
docker network inspect bridge.
localhost:8001 on the host machine? On the host run ip a and look for
inet. It will have an IP address of format
192.168.x.y. That is the address that you use instead of
localhost when you want to talk to the host machine. However this port will likely be firewalled, so you'll need to open it on the host. This may not be worth it. Thanks to: https://nickjanetakis.com/blog/docker-tip-35-connect-to-a-database-running-on-your-docker-host
$ docker run --name web1 -d -p 8080:80 nginx? It means traffic to port 8080 on the host machine is sent to port 80 on the docker container. You can prove this with
curl 127.0.0.1:8080. Since nginx was running with this command, we should get an nginx response. If you are running docker on a publicly accessible server, e.g. one you SSH'ed into, then, assuming no firewalls, from any arbitrary machine you could curl HOSTIP:8080 and it would work.
-p 80:80. But watch out: this can lead to trip ups, e.g. if someone has a local postgres database running that interferes with or is confused for the docker database.
The container name (e.g. app in docker run --name=app) is the hostname. Thus, for example, the database host for a laravel application might go from being set to
127.0.0.1 on a non-dockerized codebase to being
mysql in a dockerized one (if that was the name I gave to my mysql container). You can confirm this with
getent hosts localhost (the
getent hosts command, available on macOS and Linux, resolves hosts to IP addresses - e.g. localhost is resolved to 127.0.0.1 on my mac)
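You can try getent directly, independent of Docker (the exact output depends on your /etc/hosts and nsswitch configuration):

```shell
# Resolve "localhost" via the system's name-service databases
getent hosts localhost
```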
docker run ... --network=mynetwork ...
docker network ls, you'll see a column
scope. By default, this says
local for each network, meaning that the network exists only on this Docker host.
bridge, and then running
brctl show - it'll list
docker0 if you have docker running. By default, however, no interfaces will be connected to it.
ip a on the host system, you'll see info about the
docker0 bridge. The subnet details (172.17.0.0/16 in my case) match what you'll see with
$ docker network inspect bridge
brctl show, you'll see an interface was added, corresponding to the container.
$ docker network create NAME - by default it uses the bridge driver.
docker container run, which spins up a container from an image, can run much faster than if it also needed to boot up a VM.
docker container run commands
$ docker-compose up. Or to detach from terminal:
$ docker-compose up -d
composer, a PHP package manager.
$ docker-compose down
docker-compose down is not enough. You need to do
docker-compose down --volumes
Here's how to interpret part of the docker-compose.yml file:
```yaml
# MySQL service
db:
  image: mysql:5.7.22
  container_name: db
  restart: unless-stopped
  tty: true
  ports:
    - "4306:3306"
  environment:
    MYSQL_DATABASE: laravel
    MYSQL_ROOT_PASSWORD: your_mysql_root_password
  volumes:
    - dbdata:/var/lib/mysql/
    - ./mysql/my.cnf:/etc/mysql/my.cnf
  networks:
    - app-network

# App service
app:
  build:
    context: .
    dockerfile: Dockerfile
```
Big picture: There are two containers: app and
db. These are both automatically created. The container variables are essentially like command line arguments to
docker run for each individual entry, e.g.
$ docker run -d --name app --restart unless-stopped app
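For instance, the db service from the compose file above corresponds roughly to this single docker run command (a sketch, not verified against this project; compose additionally creates the network and named volume for you, and -v needs an absolute host path, hence $(pwd)):

```shell
docker run -d \
  --name db \
  --restart unless-stopped \
  -t \
  -p 4306:3306 \
  -e MYSQL_DATABASE=laravel \
  -e MYSQL_ROOT_PASSWORD=your_mysql_root_password \
  -v dbdata:/var/lib/mysql/ \
  -v "$(pwd)/mysql/my.cnf":/etc/mysql/my.cnf \
  --network app-network \
  mysql:5.7.22
```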
image - pulls a mysql image at version 5.7.22 from Docker Hub (or equivalent registry)
environment - defines environment variables available in this container
restart: unless-stopped - a restart policy. This one restarts a container if it fails... with the exception of when you deliberately stop the container.
tty - (NOT SURE) A tty is essentially a text input/output environment, aka a shell. Setting this to true enables better interactivity via your local shell.
ports - syntax is "HOST:CONTAINER" - i.e. here it maps port 4306 on your local machine to port 3306 on the container.
dbdata). These volumes' contents exist outside the lifetime of a container.
./mysql/my.cnf on the host machine ('./' so current directory) will contain the same files as
/etc/mysql/my.cnf in the container. When the container is removed, the directory is unaffected on the host machine.
networks - here we're only setting a name to the network rather than configuring anything special
build - specifies the location of the
Dockerfile used to build the container. Used instead of
image. Here it's used because there is a custom Dockerfile.
Pro tip if you have multiple docker projects on your computer: give your services and containers different names in each project (otherwise they clash). E.g. I had
mailhog in project_s and
mailhog in project_p. They did NOT work well together. Better names were
```yaml
services:
  wordpress:
    volumes:
      # enter name and corresponding directory on first container
      - project_p:/var/www/html
  wordpress_cli:
    volumes:
      # enter name and corresponding directory on second container
      - project_p:/var/www/html
    # specify order
    depends_on:
      - wordpress
# then declare volume generally
volumes:
  project_p:
```
$ docker-compose -f docker-compose.yml -f docker-compose-production.yml build (uses both the regular docker-compose.yml file and the docker-compose-production.yml file)
docker run -e MYSQL_USER=homestead mysql:latest
be careful: the docker-compose commands (like
ps) will give different outputs depending on what you pass to them
```shell
$ docker-compose ps
# No results:
Name   Command   State   Ports
```
vs. when I pass the
docker-compose-development.yml as well:
```shell
$ docker-compose -f docker-compose.yml -f docker-compose-development.yml ps
 Name             Command             State    Ports
----------------------------------------------------
mysql   docker-entrypoint.sh mysqld   Exit 0
```
docker cp <containerid>:/<path> <host-path>
get the container ID with
$ docker container ls
then carry out the copy with something like:
docker container cp 267a3ee553e2:/var/www/html/wp-content/ .
The ADD command accepts URLs.
This is better than curling since, if you curl into docker, some ways of using volumes delete the file immediately. Also, curl (the package) might not be available in the image.
```dockerfile
# this works
ADD https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar /usr/local/bin/wp

# this does not
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
mv wp-cli.phar /usr/local/bin/wp
# ERROR: File not found
```
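If you do want to use curl, the usual workaround is a single RUN layer that downloads and installs in one step (a sketch; it assumes a Debian-based image where curl can be installed):

```dockerfile
FROM debian:buster-slim

# download, move and chmod in one layer, so no intermediate file is lost
RUN apt-get update && apt-get install -y curl \
    && curl -fsSL -o /usr/local/bin/wp \
       https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar \
    && chmod +x /usr/local/bin/wp
```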