Introduction to Docker: Containerize Your Applications

Have you ever heard a developer say, “But it works on my machine!”?
It’s a classic frustration in the world of software development. Applications might behave perfectly on a developer’s laptop but then crash or break when deployed to a different environment like a server or a teammate’s machine. These problems usually happen because of inconsistent environments, dependency conflicts, or missing configuration files.

For decades, teams struggled with these challenges — wasting valuable time debugging and trying to “make it work” across different setups.
This is where Docker comes in.

Docker is a platform that uses a technology called containerization. Instead of worrying about whether the target machine has the right operating system, libraries, or settings, you can package everything your application needs into a container. A container is a lightweight, standalone environment that includes your code, runtime, libraries, and system tools — all bundled together.
Think of it like putting your app and everything it needs inside a sealed box that you can ship anywhere — and it will work exactly the same.

In this post, we’ll explore what Docker is, how containerization works, and walk through a simple hands-on example to bring it all together.
Whether you’re new to Docker or looking for a refresher, you’re in the right place.


1. What is Docker?

At its core, Docker is an open-source platform that allows you to build, package, and run applications inside containers.

But what exactly are containers?

Imagine you are shipping a fragile object overseas. If you simply place it on a truck, it might get damaged. Instead, you put it inside a sturdy, standardized shipping container.
Similarly, in the world of software, a container packages everything an application needs — its code, libraries, runtime, and settings — into a sealed, portable unit. This way, it can run consistently on any system, regardless of differences between development, testing, and production environments.

From Virtual Machines to Containerization

Before Docker, many teams used virtual machines (VMs) to solve environment issues. A virtual machine emulates an entire operating system, running a “guest” OS (like Linux) inside your real “host” OS (like Windows or macOS).
While powerful, VMs are heavyweight — they require lots of memory, disk space, and startup time.

Docker containers are different:
Instead of emulating an entire operating system, they share the host OS kernel — the core part of the operating system that manages processes and hardware.
This makes containers much faster, lighter, and more efficient than traditional VMs.


Key Docker Concepts

Before we go deeper, it’s important to understand a few key terms:

  • Container:
    A running instance of an application bundled with everything it needs. Think of it as a lightweight, isolated “mini-computer” specifically for your app. Containers are fast to start and use very little memory.
  • Image:
    A snapshot or blueprint used to create containers. An image includes the application code plus instructions on how to set up the environment.
    Example: python:3.9-slim is an image that gives you a small Linux system with Python 3.9 pre-installed.
  • Dockerfile:
    A recipe file (plain text) that contains step-by-step instructions to build an image. It tells Docker what base image to use, what software to install, what files to copy, and how to start the app.
  • Registry:
    A storage and sharing center for Docker images. You can upload your images to a registry like Docker Hub (the public registry) or a private registry like Amazon Elastic Container Registry (ECR), so others can download and run them.
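
To make these terms concrete, here is a quick sketch using the python:3.9-slim image mentioned above (it assumes Docker is already installed):

# Pull an image (the blueprint) from the Docker Hub registry
docker pull python:3.9-slim

# Start a container (a running instance) from that image;
# --rm cleans the container up again after it exits
docker run --rm python:3.9-slim python --version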

Why Docker Matters

Here’s why developers and companies around the world are using Docker today:

  •  Consistency: Eliminate “works on my machine” issues.
  •  Portability: Run the same container on Linux, macOS, or Windows.
  •  Scalability: Deploy microservices seamlessly.
  •  Resource Efficiency: Containers use fewer resources than VMs.

2. Docker Architecture: How It Works

To understand how Docker operates under the hood, you need to know about its basic architecture.
At a high level, Docker follows a client-server model — meaning that different components communicate with each other to make things happen behind the scenes.

Here’s how it all fits together:


The Docker Engine

Docker Engine is the core technology that powers Docker. It is made up of three main parts:

  • Docker Daemon (dockerd):
    The Docker Daemon is a background service that does all the heavy lifting.
    It listens for requests from clients and manages Docker objects such as images, containers, networks, and volumes.
    You don’t interact with the daemon directly; it works quietly behind the scenes to make sure your containers are created, started, and maintained.
  • Docker Client (docker CLI):
    This is the tool you use to tell Docker what to do.
    Whenever you type a command like docker build or docker run, you’re using the Docker command-line interface (CLI).
    The Docker Client translates your command into an API request and sends it to the Docker Daemon for processing.
  • Docker Registry:
    A registry is a remote storage location where Docker images are stored and shared.
    The most popular public registry is Docker Hub, but there are many others like Amazon Elastic Container Registry (ECR) or Google Container Registry (GCR).
    When you need an image, Docker pulls it from a registry; when you create a new image, you can push it to a registry to share it with others.
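
You can watch this client-daemon handshake yourself. As a rough sketch (assuming Docker is installed and the daemon is running):

# The CLI (client) turns this into an API request to the Docker Daemon,
# which pulls the image from the Docker Hub registry
docker pull hello-world

# The daemon stores the image locally; ask it for the list
docker images

Running docker version also prints separate "Client" and "Server" sections, one for each side of this architecture.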

How a Container is Created and Run

Let’s walk through what happens when you run a container:

1. Command Issued:
You run a command like docker run flask-docker.

2. Image Check:
Docker checks whether the requested image (flask-docker) exists locally on your machine.

3. Image Pull (if needed):
If the image is not found locally, Docker pulls it from a registry like Docker Hub.

4. Container Creation:
Using the image, Docker creates a new container, a lightweight, isolated environment ready to run your application.

5. Networking Setup:
Docker attaches the container to a private network so that containers can communicate with each other securely. You can also configure containers to expose specific ports to your local machine or the outside world.

6. Execution:
The container starts running based on the instructions defined inside the image (for example, it may run a Python web server if you built it that way).
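
All six steps are triggered by a single command. As a minimal sketch using hello-world, Docker's official test image:

# If hello-world isn't already cached locally, Docker pulls it from
# Docker Hub, creates a container, sets up networking, and runs it
docker run hello-world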


3. Essential Docker Commands

To work with Docker effectively, you’ll need to be familiar with a few core commands.
Let’s walk through them — explaining what they do, when you would use them, and why they matter.

# Building an Image
docker build -t my-app .

This command tells Docker to create an image from your project directory.

  • docker build initiates the build process.
  • -t my-app gives the image a name (in this case, “my-app”) so you can refer to it later easily.
  • . tells Docker to look for a Dockerfile in the current directory (the “dot” means “current folder”).

🔵 Why build an image?
Think of an image like a recipe: it defines exactly how to set up an environment to run your app.

# Running a Container
docker run -p 5000:5000 my-app

This command starts a container based on the image you built.

  • docker run creates and starts the container.
  • -p 5000:5000 maps port 5000 on your local machine to port 5000 inside the container.
    (Useful for web apps — like a Flask server — so you can access it in your browser.)
  • my-app tells Docker which image to use.

🔵 What happens when you run this?
You’ll be able to open http://localhost:5000 and see your app running!


# Viewing Running Containers
docker ps

This shows all containers that are currently running.
You’ll see container IDs, names, images used, uptime, and ports exposed.


# Listing All Containers (Including Stopped Ones)
docker ps -a

Sometimes, containers crash or you stop them manually.
This command shows every container, whether it’s still running or not.


# Viewing Container Logs
docker logs <container-id>

Need to debug what’s happening inside a container?
Use this command to view the output (stdout/stderr) from inside the container.

🔵 Tip:
You can find the <container-id> by running docker ps -a.


# Pushing an Image to Docker Hub
docker push yourusername/my-app:v1

When you create a useful image, you might want to share it or deploy it somewhere else.
This command uploads your image to a Docker registry like Docker Hub.

  • yourusername/my-app is your Docker Hub username and image name.
  • :v1 is a tag indicating the version.

🔵 Important:
Before pushing, make sure you’re logged into Docker Hub using docker login.
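
Putting it together, a typical push flow looks like this (yourusername is a placeholder for your own Docker Hub account):

# Authenticate once; credentials are cached for later pushes
docker login

# Tag an existing local image under your username, then upload it
docker tag my-app yourusername/my-app:v1
docker push yourusername/my-app:v1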


4. Step-by-Step Tutorial: Containerize a Python Flask App

What is Flask?

Flask is a lightweight web framework for Python that allows you to easily build web applications. It’s popular because it’s simple to use, especially for beginners, and doesn’t come with a lot of extra overhead. Flask provides the essentials for web development, such as routing and request handling, but leaves you free to choose additional components as needed.

In this example, we’ll use Flask to build a very simple web application and then containerize it with Docker. The primary focus here is learning Docker and containerization, not Flask itself, so the code will stay deliberately simple.

Step 1: Create a Simple Flask Application

– Folder Structure:

flask-docker/

├── app.py
├── requirements.txt
└── Dockerfile

Create a project folder:

mkdir flask-docker
cd flask-docker

Here’s what each file is for:

  • app.py: Your main Python web app.
  • requirements.txt: A list of Python libraries your app needs.
  • Dockerfile: Instructions for Docker to build an image.

Step 2: Write the Flask app (app.py):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Hello from Docker!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

What’s happening here? A quick overview, in the order the code runs:

Flask(__name__): This creates a new Flask application. __name__ is used to determine the root path for the app.

@app.route('/'): This defines the root route of our web application. When users visit the root URL (/), the home() function is executed.

app.run(host='0.0.0.0', port=5000): This starts the Flask development server. The host='0.0.0.0' part makes the app reachable from outside the container, which is essential when running in Docker.

Step 3: Define Dependencies in requirements.txt

To run the Flask app, we need to install Flask. This is done via the requirements.txt file, which lists all the Python packages needed for the project.

requirements.txt:

flask==2.0.1

This tells Docker (and humans) which Python packages the app needs.
Only Flask is required here!
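
If you’d like to sanity-check the app before involving Docker at all, you can run it directly (this assumes Python 3 and pip are installed on your machine):

pip install -r requirements.txt
python app.py
# then open http://localhost:5000 in your browser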


Step 4: Create the Dockerfile

Now, let’s teach Docker how to build our app into a container. The Dockerfile is the set of step-by-step instructions Docker follows to do that.

# Start with a lightweight Python base image
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy only the requirements first, then install them
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Tell Docker which port the app will use
EXPOSE 5000

# Define the default command to run the app
CMD ["python", "app.py"]

Explanation of Dockerfile Commands:

  • FROM python:3.9-slim: This sets the base image for the container. It uses a lightweight version of Python 3.9.
  • WORKDIR /app: This sets the working directory inside the container to /app. All subsequent commands will run in this directory.
  • COPY requirements.txt .: This copies the requirements.txt file into the /app directory in the container.
  • RUN pip install --no-cache-dir -r requirements.txt: This installs the Python dependencies listed in requirements.txt (in this case, just Flask).
  • COPY . .: This copies the entire content of your local directory (the Flask app code) into the container.
  • EXPOSE 5000: This documents that the app listens on port 5000 (Flask’s default). Note that EXPOSE alone doesn’t publish the port; you still map it with -p when you run the container.
  • CMD ["python", "app.py"]: This is the command that runs when the container starts. It runs the Flask app (app.py).

Step 5: Build and Run the Container

Once the Flask app and Dockerfile are set up, we can build the Docker image and run the container.

  1. Build the Docker Image:
docker build -t flask-docker .

This command creates a Docker image named flask-docker based on the instructions in the Dockerfile.


  2. Run the Container:
docker run -p 5000:5000 flask-docker

This command runs the container and maps port 5000 on your local machine to port 5000 inside the container (where Flask is running).

Now visit http://localhost:5000 — you should see “Hello from Docker!” 🎉
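
If you prefer the terminal, you can verify the same thing with curl:

curl http://localhost:5000
# Expected output: Hello from Docker!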


Step 6: Push Your Image to Docker Hub

Want to share your container with the world or deploy it elsewhere?
You need to upload (push) it to a Docker registry like Docker Hub.

What is Docker Hub?

Docker Hub is a popular cloud registry that allows developers to share Docker images with others. When you push your image to Docker Hub, you can access it from anywhere and run it on any machine with Docker installed.

First, tag the image properly:

docker tag flask-docker yourusername/flask-docker:v1

  • yourusername: Replace this with your Docker Hub username.
  • :v1: A version tag for your image.

This command tags the image flask-docker with the name yourusername/flask-docker and version v1. The tag allows you to identify the image on Docker Hub.

Then, push the image:

docker push yourusername/flask-docker:v1

This command uploads the flask-docker:v1 image to your Docker Hub repository.
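
From this point, anyone (including future you, on another machine) can pull and run the image. A quick sketch:

docker pull yourusername/flask-docker:v1
docker run -p 5000:5000 yourusername/flask-docker:v1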


5. Best Practices for Docker in Production

While Docker is powerful, it’s important to follow a few best practices to keep your containers efficient, secure, and maintainable. These habits will also make your containers easier to work with and ready for real-world applications. Let’s cover the essentials.

1. Keep Images Small

To make your Docker images faster to download and easier to store, it’s important to keep them as small as possible.

  • Use Slim Base Images: Start with minimal base images (e.g., python:3.9-slim) to keep your images small.
  • Remove Unnecessary Files: Use a .dockerignore file to exclude unnecessary files like logs, temporary files, or development dependencies.
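
As a starting point, a .dockerignore for the Flask project in this post might look like the sketch below; the exact entries depend on your project:

# .dockerignore: paths Docker should skip when building the image
__pycache__/
*.pyc
.git/
venv/
*.log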

2. Stay Secure

Security is important for any application, and Docker is no different. A few easy practices can help keep your containers secure.

  • Avoid running containers with too many privileges; switch to a non-root user when possible (see the sketch below).
  • Prefer official or verified images, which are regularly updated and patched.
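
For example, you could extend the tutorial’s Dockerfile to run as an unprivileged user. A minimal sketch (appuser is an arbitrary name); these lines would go near the end of the Dockerfile, just before the CMD instruction:

# Create a non-root user and switch to it before the app starts
RUN useradd --create-home appuser
USER appuser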

3. Use Volumes for Data

Docker containers are ephemeral: any data written inside a container’s filesystem is lost when the container is removed. To keep data safe and persistent, use Docker volumes.

  • Why Volumes?: Volumes allow data to persist outside the container, which is helpful when containers are stopped or recreated.
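
For instance, a named volume survives even after the container is removed (mydata and /app/data are illustrative names):

# Create a named volume and mount it inside the container
docker volume create mydata
docker run -p 5000:5000 -v mydata:/app/data flask-docker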

4. Log Your Application Output

Capturing logs is important for tracking your application’s behavior and finding issues. Make sure your app writes useful information to stdout and stderr; Docker captures these streams automatically, so you can read them later with docker logs.

5. Use Docker Compose for Multi-Container Apps

When your application grows, you may need to run multiple containers at once (for example, one container for the web app and another for the database). Docker Compose makes this easy. It allows you to define and run multi-container applications with a simple YAML configuration file (docker-compose.yml).

While you don’t need Docker Compose for every project, it’s helpful for more complex setups. It simplifies container management by allowing you to start all the containers in your app with a single command: docker-compose up
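
As a rough sketch, a docker-compose.yml for the Flask app from the tutorial plus a Redis cache might look like this (the redis service is purely illustrative):

version: "3.8"
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: redis:alpine

Running docker-compose up in this folder builds the web image and starts both containers together.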


6. Docker vs. Traditional Virtualization

When it comes to virtualization, there are two main approaches: traditional virtual machines (VMs) and Docker containers. While both technologies help you run applications in isolated environments, they work in very different ways.

1. Architecture
  • Virtual Machines: A virtual machine is like a computer inside your computer. Each VM runs its own full operating system (OS) and needs a lot of resources to do so. The VM has its own kernel (the core part of the OS).
  • Docker Containers: A container, on the other hand, shares the OS kernel with the host machine. It uses fewer system resources because it doesn’t need to run its own OS, just the application and the necessary libraries.
2. Resource Usage
  • Virtual Machines: Because each VM runs a complete operating system, it consumes more resources (memory, CPU, storage). This can lead to slower performance and higher costs if you’re running many VMs.
  • Docker Containers: Containers are much more lightweight. They only include the application and the essential libraries, which means they take up less space and run faster.
3. Startup Time
  • Virtual Machines: VMs can take a while to start because they need to boot up the entire OS.
  • Docker Containers: Containers can start almost instantly because they don’t need to boot a full OS—just the application and its dependencies.
4. Portability
  • Virtual Machines: VMs are less portable. They depend on the hypervisor (the software that manages VMs), and this may vary between environments.
  • Docker Containers: Containers are highly portable. You can run the same container on different systems (whether it’s Linux, macOS, or Windows) as long as Docker is installed.
5. Management
  • Virtual Machines: VMs are typically managed using specialized tools for virtualization (like VMware or Hyper-V). Managing many VMs can become complex.
  • Docker Containers: Docker makes it easier to manage containers, especially when using tools like Docker Compose for multi-container apps or Kubernetes for scaling.

Why Docker is Ideal for Modern Development

These differences make Docker containers much better suited for modern development practices, like agile workflows and microservices architectures. Docker helps you deploy applications faster, save resources, and maintain more efficient and scalable environments.



Conclusion

Congratulations on completing the guide! By now, you should have a solid understanding of what Docker is and how it helps solve common development problems like inconsistent environments and slow deployments. You’ve also learned how to containerize your first application, which is a huge step toward mastering modern software development.

Docker isn’t just a tool—it’s a powerful concept that allows you to work more efficiently, scale your applications, and collaborate with your team in a seamless way. As you continue your learning journey, don’t hesitate to experiment with more complex Docker setups, try using Docker Compose for multi-container apps, or explore orchestration tools like Kubernetes.

Remember, Docker is all about efficiency, portability, and scalability. By embracing containerization, you’re joining a community of developers who are shaping the future of software development.

  • Ready to dive deeper? Try containerizing a small project today and see the magic of Docker in action.
  • Experiment with Docker Compose to manage multiple containers.
  • Keep learning: There’s always more to explore in the world of Docker, from advanced networking to orchestration tools like Kubernetes.

Stay tuned for the next post…See ya 👋

"Found this helpful? Spread the knowledge and share it with your network!"