// personal learning system

IT Roadmap
2025 → 2027

Networking + Infra · DevOps · Cybersec (secondary) · AI/MLOps (later) · TVZ 2nd year
// Core Rule — Never Break This

Every new technology must solve a problem created by the previous phase. If you can't articulate what problem a new tool solves in your existing setup, you're not ready for it. This is how you avoid tutorial addiction and build real depth.

Phase 0
Foundation Hardening
Timeline: 6 weeks max · Start: Now
You already have one Dockerized app running in VirtualBox. That's level 1. This phase is about building the operational muscle memory that makes everything else make sense. You're not adding new tech — you're deepening what you already touched. The goal is to go from "it works" to "I know exactly why it works and how to fix it when it breaks."
// Docker Compose Depth
  • Rebuild Audiobookshelf from scratch using a proper docker-compose.yml. Don't copy-paste — write it yourself and understand every line.
  • Implement named volumes and bind mounts correctly. Know the difference — when to use which and why.
  • Add healthchecks and restart policies to all containers. restart: unless-stopped is your friend. Healthchecks define when a container is actually "ready".
  • Use .env files for environment variables — no hardcoded secrets. Every password, port, and domain goes in a .env file. Add .env to .gitignore immediately.
  • Set up a custom bridge network in Docker Compose. Services on the same network talk by container name, not IP. Understand why.
  • Learn docker logs, docker exec, and docker inspect cold. You should be able to debug any container issue without a GUI.
  • Deploy Portainer for a container management UI. A useful visual layer, but always know the CLI equivalent of what you're clicking.
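The bullets above can be sketched as a single compose file. This is a hedged example, not the official Audiobookshelf config — the image tag, ports, media path, and healthcheck command are assumptions (some images don't ship curl; substitute the app's own health endpoint if needed):

```yaml
# docker-compose.yml — illustrative sketch; adapt paths and ports to your setup
services:
  audiobookshelf:
    image: ghcr.io/advplyr/audiobookshelf:latest
    restart: unless-stopped                    # survives reboots and crashes
    ports:
      - "${ABS_PORT}:80"                       # ABS_PORT comes from .env, not hardcoded
    environment:
      - TZ=${TZ}
    volumes:
      - abs_config:/config                     # named volume: Docker-managed app state
      - /mnt/hdd1/audiobooks:/audiobooks:ro    # bind mount: your media, read-only
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80/healthcheck"]
      interval: 30s
      timeout: 5s
      retries: 3
    networks:
      - homelab

networks:
  homelab:
    driver: bridge                             # custom bridge: name-based service discovery

volumes:
  abs_config:
```

With a matching `.env` containing `ABS_PORT=13378` and `TZ=Europe/Zagreb`, every line here maps to one checklist item above.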
// Linux Administration
  • Master systemd — enable, disable, start, stop, status, journalctl. Every service on Linux is managed by systemd. You need this daily.
  • Harden SSH — disable root login, use key-based auth only, change the default port. Your future self will thank you when this becomes habit.
  • Configure UFW firewall rules — only open what you need. Default deny inbound, allow only specific ports. Understand what each rule does.
  • Set up automated backups with rsync + cron. Write a bash script that backs up your Docker volumes to one of your HDDs on a schedule.
  • Understand file permissions deeply — chmod, chown, umask. Be able to explain rwxr-xr-x without looking it up.
  • Learn disk management — lsblk, df, du, mount, fstab. You have 8TB of HDDs. Make sure they mount on boot and you can manage them properly.
// Core Stack to Deploy (on Ubuntu, before Proxmox)
  • Deploy Nginx Proxy Manager. Your reverse proxy — all services will sit behind this. Handles SSL certs automatically via Let's Encrypt.
  • Deploy Pi-hole or AdGuard Home for local DNS. Now your services get real hostnames like audiobookshelf.local instead of 192.168.x.x:port.
  • Deploy Uptime Kuma for monitoring. Set up alerts for all your services. The first time something goes down and you get notified before you notice it — that's the moment monitoring clicks.
  • Deploy WireGuard VPN so you can securely reach your homelab from your laptop remotely. Use wg-easy for a simpler setup initially.
  • Set up a basic dashboard (Homarr or Heimdall). One page with links to all your services. Looks good, and forces you to organize what you're running.
  • Start a documentation repo (Git + Markdown). Architecture diagram, service list, network map, what broke and how you fixed it. This is non-negotiable.
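For reference, a WireGuard server config is short enough to understand line by line. A hedged sketch — the subnet, port, and key placeholders are examples (wg-easy generates the equivalent for you):

```ini
# /etc/wireguard/wg0.conf — server side; 10.8.0.0/24 is an example VPN subnet
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# your laptop
PublicKey = <laptop-public-key>
AllowedIPs = 10.8.0.2/32
```

Each peer gets one `[Peer]` block; `AllowedIPs` here acts as both routing table and access control.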
// Exit Checkpoints — don't move to Phase 1 until all green
Can recover a broken container from scratch without Googling the basics
All services survive a full system reboot automatically
Backups are automated and you've tested restoring from them
You can reach your homelab from your laptop via WireGuard
You can explain your entire network setup to someone without checking anything
Documentation repo exists with at least a network map and service list
You stop fearing logs — they're your first debugging instinct
↓ proxmox unlocks here ↓
Phase 1
Infrastructure Mindset
Timeline: 2–3 months · Requires: Phase 0 checkpoints done
This is where you stop being a person who uses computers and start being a person who builds systems. Proxmox replaces your VirtualBox. Your desktop stops being "Ubuntu with some VMs" and becomes a dedicated virtualization node. You'll build proper network segmentation, migrate your existing services, and start thinking in terms of infrastructure rather than individual apps.
// Proxmox Installation
  • Get a second SSD (even 120GB is enough) and install Proxmox on it. Dual-boot setup — Ubuntu for daily work, Proxmox for homelab. Don't nuke your daily driver yet.
  • Access the Proxmox web UI at https://your-ip:8006. Get comfortable with the interface before touching anything else.
  • Understand the difference between VMs and LXC containers in Proxmox. LXC = lightweight, shares the kernel, great for services. VM = full isolation, needed for Windows, pfSense, etc.
  • Create VM templates for Ubuntu Server and Debian. Clone from a template instead of reinstalling every time. This is your first taste of infrastructure efficiency.
  • Learn snapshots — create, restore, delete. Snapshot before every major change. Restore when you break something. You will break something.
  • Configure Proxmox storage — local, local-lvm, and add your HDDs as storage. You have 8TB of HDDs. Add them as a directory storage pool for VM disks and backups.
  • Set up automated Proxmox backups (PBS or the built-in backup). Schedule nightly VM backups to your HDD storage. If it's not backed up, it doesn't exist.
// Network Segmentation with VLANs
  • Understand VLANs conceptually — tagged vs untagged, trunks, access ports. Draw it on paper before configuring it: what traffic goes where and why.
  • Create Linux bridges in Proxmox for network separation. vmbr0 for management, vmbr1 for services, vmbr2 for lab/experimental traffic.
  • Deploy pfSense or OPNsense as a VM — your virtual router/firewall. This is the backbone of proper network segmentation. All inter-VLAN traffic routes through here.
  • Configure firewall rules between VLANs. The management VLAN can reach the services VLAN. The lab VLAN is isolated from everything. Understand why each rule exists.
  • Document your network topology as a diagram. Draw.io or Excalidraw. Every VLAN, every VM, every connection. Keep it updated.
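The bridge layout above lives in Proxmox's `/etc/network/interfaces`. A hedged sketch — the NIC name (`enp3s0`) and addresses are assumptions for illustration:

```ini
# /etc/network/interfaces — Proxmox host; adjust NIC name and IPs to your network
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp3s0        # management bridge, attached to the physical NIC
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge-ports none          # internal-only services bridge, no physical port
    bridge-stp off
    bridge-fd 0
```

VMs attach to `vmbr1` get no path to the outside world except through whatever router VM (pfSense/OPNsense) you also attach to it — that's the segmentation.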
// Service Migration into Proxmox
  • Migrate your Docker stack into a dedicated Docker VM inside Proxmox. Don't mix everything in one VM — Docker services live in the Docker VM, DNS in the DNS VM, etc.
  • Migrate Pi-hole/AdGuard into its own LXC container. DNS should be lightweight and always-on. LXC is perfect for this.
  • Move your SOC lab (Wazuh, Kali, Metasploitable) into Proxmox VMs. Put them on your isolated lab VLAN. Now your SOC lab has real network isolation.
  • Set up WireGuard VPN inside Proxmox. Now your VPN gateway lives in the infrastructure layer, not on your Ubuntu desktop.
// Exit Checkpoints
Can spin up a new VM from template in under 5 minutes
VLAN segmentation is working — lab traffic can't reach services traffic
All previous services running inside Proxmox, nothing left on bare Ubuntu
Can recover a failed VM from snapshot or backup
Network topology diagram is accurate and up to date
You think in VMs/containers, not in "apps on my computer"
↓ monitoring + automation unlocks here ↓
Phase 2
Monitoring + Automation
Timeline: 2–3 months · Requires: Phase 1 checkpoints done
By now your infrastructure exists and works. But you're probably doing the same manual steps repeatedly, and you have no visibility into what's actually happening inside your systems. This phase fixes both. You'll add proper observability so you can see everything, and start automating the repetitive stuff so you stop being the manual layer in your own infrastructure.
// Monitoring Stack
  • Deploy Prometheus for metrics collection. It scrapes metrics from your VMs, containers, and services on a schedule.
  • Deploy Node Exporter on every VM. Exposes CPU, RAM, disk, and network metrics to Prometheus. One per host.
  • Deploy Grafana and connect it to Prometheus. Build a dashboard showing all your VMs' resource usage. Understand what normal looks like.
  • Add Loki for log aggregation. Ship logs from all your containers and VMs into one place. Never SSH into 5 machines to find a log again.
  • Set up alerting rules. Alert when a disk is above 80%, when a service goes down, when RAM spikes. Observability means nothing without alerts.
  • Add Proxmox metrics to Grafana via the Proxmox exporter. See VM CPU/RAM usage directly in your dashboard alongside everything else.
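A minimal version of the scrape config and the "disk above 80%" alert from the list above — target IPs and thresholds are examples, not prescriptions:

```yaml
# prometheus.yml — scrape Node Exporter on each VM every 15 seconds
global:
  scrape_interval: 15s
rule_files:
  - alert_rules.yml
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["10.10.20.11:9100", "10.10.20.21:9100"]   # docker VM, dns LXC
---
# alert_rules.yml — fire when any filesystem sits below 20% free for 10 minutes
groups:
  - name: disk
    rules:
      - alert: DiskAlmostFull
        expr: node_filesystem_avail_bytes / node_filesystem_size_bytes < 0.2
        for: 10m
```

The `for: 10m` clause is what keeps a brief spike from paging you — an alert is a sustained condition, not a data point.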
// Automation Basics
  • Write bash scripts for repetitive tasks — VM setup, backup verification, service checks. Every time you do the same thing manually twice, script it. No exceptions.
  • Learn Ansible basics — inventory, playbooks, tasks, handlers. Start simple: a playbook that installs Docker and deploys your stack on a fresh VM.
  • Write an Ansible playbook to provision your standard Ubuntu Server VM. SSH hardening, UFW setup, Docker install, user config. Run it on every new VM instead of doing it manually.
  • Version-control all your configs in Git. Docker Compose files, Ansible playbooks, Nginx configs — all in Git. Treat your infra like code.
  • Deploy Gitea — a self-hosted Git server. Your configs live on YOUR infrastructure, not GitHub. Also good practice for running a Git server.
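A skeleton of that provisioning playbook, hedged — the host group name and package list are assumptions, and the UFW task needs the `community.general` collection installed:

```yaml
# provision.yml — baseline for a fresh Ubuntu Server VM
- name: Provision standard Ubuntu Server VM
  hosts: new_vms
  become: true
  tasks:
    - name: Install base packages
      ansible.builtin.apt:
        name:
          - ufw
          - docker.io
        update_cache: true

    - name: Disable SSH root login
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: PermitRootLogin no
      notify: Restart sshd

    - name: Default-deny incoming traffic
      community.general.ufw:
        state: enabled
        policy: deny
        direction: incoming

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: ssh
        state: restarted
```

Run it with `ansible-playbook -i inventory.ini provision.yml` against every new VM — the handler pattern means sshd only restarts when its config actually changed.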
// SOC Lab Expansion (cybersec secondary)
  • Configure Wazuh to monitor traffic across your VLANs. Now your SIEM has meaningful segmented network traffic to analyze, not just one flat network.
  • Run intentional attack simulations and verify Wazuh catches them. FTP exploit, port scan, brute force — generate known traffic, confirm detection. Close the loop.
  • Write custom Wazuh detection rules for your specific lab traffic. Generic rules are a starting point. Custom rules show you actually understand what you're detecting.
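As a flavor of what a custom rule looks like — a hedged sketch for `local_rules.xml`: the parent rule ID, source subnet, and rule number are assumptions you'd adapt (Wazuh reserves IDs 100000+ for local rules):

```xml
<group name="local,lab,">
  <!-- Escalate SSH login attempts coming from the isolated lab VLAN,
       which should never initiate connections toward services. -->
  <rule id="100100" level="12">
    <if_sid>5710</if_sid>
    <srcip>10.10.30.0/24</srcip>
    <description>SSH probe from lab VLAN - segmentation violation</description>
  </rule>
</group>
```

Writing a rule like this closes the loop from the previous bullet: you know what traffic you generated, so you know exactly what the rule must catch.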
// Exit Checkpoints
Grafana dashboard shows all VMs' health at a glance
You find out about problems from alerts, not by randomly noticing
New VM provisioning is done with Ansible, not manual steps
All configs are in Git — nothing important lives only on a server
Repetitive tasks annoy you enough that you always script them
↓ advanced networking + devops ↓
Phase 3
Advanced Networking + DevOps
Timeline: 3–4 months · Requires: Phase 2 checkpoints done
This is where junior infrastructure engineer mindset starts forming. You'll go deeper into networking — the thing you actually care about — and start touching DevOps workflows properly. CI/CD, self-hosted pipelines, certificate management, advanced routing. By the end of this phase your homelab is genuinely impressive and your skill set is starting to align with real job requirements.
// Advanced Networking
  • Replace Nginx Proxy Manager with Traefik. More powerful — config-as-code, dynamic routing. Harder to set up, which is the point.
  • Set up an internal PKI with step-ca or similar. Issue your own SSL certificates for internal services. Understand the certificate chain.
  • Configure BGP or OSPF basics inside your lab with pfSense. Dynamic routing protocols — this is where Packet Tracer knowledge meets real config.
  • Implement IDS/IPS with Suricata inside OPNsense. Real-time traffic inspection on your network. Connects networking with your cybersec interest.
  • Set up proper split-horizon DNS. Internal DNS resolves to private IPs, external DNS to public ones. A standard enterprise pattern.
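To see why Traefik counts as config-as-code, here is a hedged file-provider sketch — the hostname, backend IP, and entrypoint name are examples tied to the Grafana service from Phase 2:

```yaml
# dynamic.yml — Traefik dynamic configuration via the file provider
http:
  routers:
    grafana:
      rule: "Host(`grafana.lab.internal`)"   # internal hostname from your DNS
      entryPoints:
        - websecure                          # the TLS entrypoint defined in traefik.yml
      service: grafana
      tls: {}                                # cert issued by your internal CA
  services:
    grafana:
      loadBalancer:
        servers:
          - url: "http://10.10.20.21:3000"   # Grafana VM/container
```

This file lives in Git, Traefik watches it, and routing changes apply without restarts — the exact property NPM's click-driven UI lacks.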
// DevOps Workflows
  • Set up GitHub Actions or Gitea Actions for a CI/CD pipeline. Auto-build and deploy a simple app when you push code. The "push to deploy" moment is when CI/CD clicks.
  • Deploy self-hosted runners for your CI/CD inside Proxmox. Your pipeline runs on YOUR infrastructure, not GitHub's servers.
  • Build a simple app (even a personal project) with a full CI/CD pipeline. Code push → tests run → Docker image built → deployed to your homelab. End to end.
  • Learn Terraform basics — provision a VM with code, not UI clicks. Proxmox has a Terraform provider: define your infrastructure in HCL, apply it, watch the VM appear.
  • Adopt a GitOps mindset — infra changes go through Git, not direct edits. No SSH-ing into servers and editing configs manually. Change it in Git; the pipeline applies it.
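The end-to-end pipeline above fits in one workflow file. A hedged sketch — the image name and the `make test` entrypoint are placeholders for whatever your project uses:

```yaml
# .github/workflows/deploy.yml — push to main → test → build → deploy
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: self-hosted          # your runner VM inside Proxmox
    steps:
      - uses: actions/checkout@v4

      - name: Run tests
        run: make test            # fail here = nothing gets deployed

      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .

      - name: Deploy
        run: docker compose up -d
```

Tagging images with `${{ github.sha }}` instead of `latest` means every deploy is traceable to a commit — the smallest possible version of the GitOps mindset.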
// Exit Checkpoints
Full CI/CD pipeline working — push code, watch it deploy automatically
Traefik routing working with internal SSL certs
Can explain your network topology including routing decisions to someone else
Infrastructure changes go through code, not manual CLI
You're describing your setup with terms like "pipeline", "IaC", "segmentation" naturally
↓ local AI integration ↓
Phase 4
AI Integration + MLOps
Timeline: Ongoing · Requires: Phase 3 stable
NOW your AI interest becomes powerful instead of gimmicky. Because now you have real infrastructure to deploy it INTO. You're not just downloading a model and chatting with it — you're building AI-powered systems that integrate with your existing stack. Your RTX 2060 Super with CUDA can run 7B models comfortably. Use it.
// Local Inference Stack
  • Set up CUDA drivers and verify GPU acceleration is working. nvidia-smi should show your GPU. This is the foundation for everything GPU-accelerated.
  • Deploy Ollama in a Docker container with GPU passthrough. GPU passthrough in Proxmox is a project in itself. Worth the effort.
  • Deploy Open WebUI connected to Ollama. Your own local ChatGPT-style interface running on your hardware. No API costs.
  • Run and compare Llama 3 8B, Mistral 7B, and Qwen 7B. Understand model differences: coding tasks, reasoning, speed vs quality tradeoffs.
  • Build a RAG pipeline over your documentation repo. Ask your local AI about your own infrastructure docs. Practical and impressive.
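The Ollama + Open WebUI pairing above, sketched as a compose file — assumes the NVIDIA Container Toolkit is already installed in the Docker VM; the port mapping is an example:

```yaml
# docker-compose.yml — local inference stack with GPU access
services:
  ollama:
    image: ollama/ollama
    restart: unless-stopped
    volumes:
      - ollama:/root/.ollama          # downloaded models persist here
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia          # hands the GPU to the container
              count: 1
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    restart: unless-stopped
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # container-name DNS from Phase 0
    depends_on:
      - ollama

volumes:
  ollama:
```

Note how every Phase 0 habit reappears here: named volume, restart policy, container-name networking, no hardcoded IPs.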
// AI-Powered Infra Projects
  • Build a log analysis assistant using your Loki logs + an LLM. Feed logs to an LLM, get plain-English explanations of what's happening. Networking + AI = MLOps.
  • Set up a self-hosted coding assistant (Continue.dev or similar). A local Copilot in your editor, backed by your Ollama instance.
  • Experiment with embeddings and vector databases (Qdrant or Chroma). The backbone of RAG systems. Understand semantic search vs keyword search.
// You've arrived when...
Local LLM is integrated INTO your infrastructure, not running as a standalone toy
You can describe the difference between inference, RAG, and fine-tuning
AI is a tool in your stack, not the point of your stack
// traps to avoid at all costs //
// Do Not Touch Until Phase 3+ Is Done
Kubernetes — learn Docker and Linux deeply first or you'll just memorize YAML without understanding systems
Multi-node clustering — you need operational maturity with a single node before scaling
Cloud (AWS/Azure) heavy focus — local infra pain points teach you WHY cloud exists. Learn local first
Terraform before Ansible — Ansible is simpler, more immediately useful. Don't skip it
Collecting certifications before building things — certs after skills, not instead of skills