Building a Real Web Infrastructure: My First Proxmox Project
From Theory to Reality
The Assignment: Build a complete web infrastructure for GSB (Galaxy Swiss Bourdin) using Proxmox virtualization. Two containers. Network configuration. Security hardening. SSH access for developers.
Translation: Build something that actually works in the real world.
The reality? This is what separates "I can follow a tutorial" from "I can architect infrastructure."
As a first-year BTS SIO SISR student, this was my first taste of real system administration. Not clicking through a GUI. Not copying commands blindly. Actually understanding what I was building and why.
Here's what I learned.
The Mission: GSB Web Infrastructure
Context: GSB is a fictional pharmaceutical company (classic French IT teaching scenario). They need a web application to manage their management fees. SLAM students develop the PHP application. SISR students (that's me) build and secure the infrastructure to run it.
Simple, right? Narrator: It wasn't.
📋 The Requirements
- Web Server Container: Debian with Apache and PHP
- Database Container: Ubuntu with MariaDB
- Network: Containers must communicate securely
- Security: Firewall rules, SSH hardening, intrusion prevention
- Access: SSH for developers to deploy code
- Documentation: Everything must be reproducible
The Architecture
🏗️ Infrastructure Overview
One Proxmox VE host (VM/container manager, resource allocation) running two LXC containers:
- gsb-web: Apache 2.4 + PHP 8.2, ports 80 (HTTP) and 22 (SSH) exposed
- gsb-db: MariaDB 10.11, port 3306, reachable from the internal network only
The Tech Stack
- Proxmox VE: Open-source virtualization platform. Think VMware, but free and better. Manages containers (LXC) and VMs (QEMU/KVM).
- LXC: Lightweight Linux containers. Faster than VMs, isolated like VMs. Perfect for web infrastructure.
- Apache + PHP: The classic web stack. Apache serves pages, PHP executes server-side code. Battle-tested and reliable.
- MariaDB: MySQL fork. Stores application data. More open, faster, and better maintained than Oracle's MySQL.
- UFW: Uncomplicated Firewall. Frontend for iptables. Blocks unwanted traffic, allows only what we need.
- Fail2ban: Intrusion prevention system. Monitors logs, detects attacks, bans IPs automatically. Your silent guardian.
Phase 1: Container Creation
First step: spin up the containers. Proxmox makes this deceptively easy.
Create Web Server Container
Proxmox UI → Create CT → Debian 12 template → Assign resources (2GB RAM, 20GB disk, 2 cores)
# Container specs
Hostname: gsb-web
OS: Debian 12 (Bookworm)
RAM: 2048 MB
Disk: 20 GB
CPU: 2 cores
Network: vmbr1 (bridged)
Create Database Container
Same process, different OS and network config.
# Container specs
Hostname: gsb-db
OS: Ubuntu 22.04 LTS
RAM: 2048 MB
Disk: 30 GB
CPU: 2 cores
Network: vmbr1 (bridged)
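If you prefer the command line, the same two containers can be created from the Proxmox host with pct. A minimal sketch mirroring the specs above; the template file names, VMIDs, and storage ID are placeholders to adapt to your setup:
# Web container (VMID 101) - adjust template name, storage, and IDs
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname gsb-web --memory 2048 --cores 2 \
  --rootfs local-lvm:20 --net0 name=eth0,bridge=vmbr1,ip=dhcp \
  --unprivileged 1
# Database container (VMID 102)
pct create 102 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
  --hostname gsb-db --memory 2048 --cores 2 \
  --rootfs local-lvm:30 --net0 name=eth0,bridge=vmbr1,ip=dhcp \
  --unprivileged 1
# Start both
pct start 101 && pct start 102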
Initial Configuration
Start both containers, update packages, set proper hostnames and timezone.
# On both containers
apt update && apt upgrade -y
hostnamectl set-hostname [gsb-web/gsb-db]
timedatectl set-timezone Europe/Paris
apt install sudo vim curl wget -y
First Challenge: Containers didn't have internet access after creation. Network bridge was misconfigured.
Solution: Check /etc/network/interfaces on Proxmox host, ensure bridge has correct gateway. Restart networking service. Containers came online.
Lesson: Always verify network connectivity before installing anything.
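For reference, here's roughly what a working bridge stanza looks like on the Proxmox host. The addresses, gateway, and uplink interface are illustrative and will differ on your network:
# /etc/network/interfaces on the Proxmox host (values are examples)
auto vmbr1
iface vmbr1 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
# Apply, then verify from inside a container
systemctl restart networking
ping -c 3 deb.debian.org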
Phase 2: Web Server Setup
Time to build the web stack. Apache, PHP, and all the necessary modules.
# Install Apache and PHP
apt install apache2 -y
# Debian 12 (Bookworm) ships PHP 8.2 by default; newer versions need a third-party repo
apt install php8.2 php8.2-fpm php8.2-mysql php8.2-xml php8.2-mbstring -y
# Enable PHP-FPM with Apache
a2enmod proxy_fcgi setenvif
a2enconf php8.2-fpm
# Enable required Apache modules
a2enmod rewrite headers ssl
# Restart Apache
systemctl restart apache2
systemctl enable apache2
# Verify installation
systemctl status apache2
php -v
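A quick way to confirm Apache is actually handing PHP off to PHP-FPM, using a throwaway phpinfo page (remove it right after):
# Temporary phpinfo page in the default docroot
echo "<?php phpinfo();" > /var/www/html/info.php
curl -s http://localhost/info.php | grep -i "PHP Version"
rm /var/www/html/info.php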
🔧 Apache Configuration
Created a proper VirtualHost for the GSB application:
# /etc/apache2/sites-available/gsb.conf
<VirtualHost *:80>
ServerName gsb.local
DocumentRoot /var/www/gsb
<Directory /var/www/gsb>
Options -Indexes +FollowSymLinks
AllowOverride All
Require all granted
</Directory>
ErrorLog ${APACHE_LOG_DIR}/gsb_error.log
CustomLog ${APACHE_LOG_DIR}/gsb_access.log combined
</VirtualHost>
# Enable site and restart Apache
a2ensite gsb.conf
systemctl reload apache2
# Create web directory with proper permissions
mkdir -p /var/www/gsb
chown -R www-data:www-data /var/www/gsb
chmod -R 755 /var/www/gsb
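Since gsb.local doesn't exist in DNS, I map it in /etc/hosts on the machine I test from. The container IP below is an example:
# On your workstation (not the container) - replace with the real container IP
echo "192.168.1.20 gsb.local" | sudo tee -a /etc/hosts
curl -I http://gsb.local/   # should hit the gsb vhost (403/404 until code is deployed)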
Phase 3: Database Server Setup
MariaDB time. Install, secure, configure for remote access.
# Install MariaDB
apt install mariadb-server mariadb-client -y
# Secure installation
mysql_secure_installation
# - Set root password: YES
# - Remove anonymous users: YES
# - Disallow root login remotely: YES
# - Remove test database: YES
# - Reload privilege tables: YES
# Start and enable service
systemctl start mariadb
systemctl enable mariadb
🗄️ Database & User Creation
# Connect to MariaDB
mysql -u root -p
# Create database and user
CREATE DATABASE gsb_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'gsb_user'@'%' IDENTIFIED BY 'StrongPassword123!';
GRANT ALL PRIVILEGES ON gsb_db.* TO 'gsb_user'@'%';
FLUSH PRIVILEGES;
EXIT;
Key decision: User created with 'gsb_user'@'%' to allow connections from any IP (within our internal network). In production, you'd restrict this to specific IPs.
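If you do want the production version, it's just a matter of scoping the account to the web container's IP and trimming privileges to what the app needs. A sketch with an illustrative IP, run as root on gsb-db:
mysql -u root -p <<'SQL'
DROP USER IF EXISTS 'gsb_user'@'%';
CREATE USER 'gsb_user'@'192.168.1.20' IDENTIFIED BY 'StrongPassword123!';
GRANT SELECT, INSERT, UPDATE, DELETE ON gsb_db.* TO 'gsb_user'@'192.168.1.20';
FLUSH PRIVILEGES;
SQL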
⚙️ Configure Remote Access
# Edit MariaDB config
vim /etc/mysql/mariadb.conf.d/50-server.cnf
# Change bind-address from 127.0.0.1 to 0.0.0.0
bind-address = 0.0.0.0
# Restart MariaDB
systemctl restart mariadb
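Worth verifying that MariaDB is now listening on all interfaces and not just loopback:
# Should show mariadbd bound to 0.0.0.0:3306 instead of 127.0.0.1:3306
ss -tlnp | grep 3306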
✅ Test Database Connection
From the web container, verify database connectivity:
# Install MySQL client on web container
apt install mariadb-client -y
# Test connection (replace DB_CONTAINER_IP)
mysql -h DB_CONTAINER_IP -u gsb_user -p gsb_db
# If connection succeeds, you're good!
Success! Web container can now communicate with database container. The infrastructure backbone is working.
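Since the application connects from PHP, I also like checking the PDO path, not just the CLI client. A one-liner sketch (replace the IP and password placeholders):
# Run on the web container; throws an exception if the connection fails
php -r 'new PDO("mysql:host=DB_CONTAINER_IP;dbname=gsb_db", "gsb_user", "StrongPassword123!"); echo "PHP -> MariaDB OK\n";'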
Phase 4: Security Hardening
Now comes the critical part: making this infrastructure secure. An exposed system without proper security is a liability, not an asset.
🔥 Firewall Configuration (UFW)
Installed and configured UFW on both containers:
# Install UFW
apt install ufw -y
# Web Container - Allow HTTP and SSH only
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp # SSH
ufw allow 80/tcp # HTTP
ufw enable
# Database Container - Allow MySQL and SSH only
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp # SSH
ufw allow 3306/tcp # MariaDB
ufw enable
# Verify rules
ufw status verbose
Mistake I Made: Enabled UFW before allowing SSH. Locked myself out of the container. Had to use Proxmox console to recover.
Golden Rule: ALWAYS configure SSH rules BEFORE enabling the firewall.
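If it does happen to you, the Proxmox host can still open a shell inside the container without SSH (the VMID is whatever you assigned at creation):
# On the Proxmox host: root shell in the locked-out container
pct enter 101
ufw disable   # then fix the rules and re-enable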
🔐 SSH Hardening
Default SSH configuration is convenient but insecure. Time to fix that.
# Edit SSH config
vim /etc/ssh/sshd_config
# Security improvements:
PermitRootLogin no # Disable root login
PasswordAuthentication yes # Keep for now (use keys later)
PubkeyAuthentication yes # Enable key-based auth
Port 22 # Default (could change for obscurity)
MaxAuthTries 3 # Limit login attempts
ClientAliveInterval 300 # Timeout idle sessions
ClientAliveCountMax 2
X11Forwarding no # Disable X11 (not needed)
AllowUsers your_username # Whitelist allowed users
# Restart SSH
systemctl restart sshd
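One habit worth adding here: validate the config before restarting, so a typo in sshd_config doesn't take SSH down with it.
# Syntax-check first; only restart if the config parses cleanly
sshd -t && systemctl restart sshd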
🛡️ Fail2ban - Intrusion Prevention
Fail2ban monitors logs and automatically bans IPs after repeated failed login attempts. Essential for any internet-facing system.
# Install Fail2ban
apt install fail2ban -y
# Create local config (don't edit jail.conf directly)
cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
vim /etc/fail2ban/jail.local
# Key configuration:
[DEFAULT]
bantime = 1h # Ban duration
findtime = 10m # Time window for failures
maxretry = 3 # Max attempts before ban
[sshd]
enabled = true
port = 22
logpath = /var/log/auth.log
# Start and enable service
systemctl start fail2ban
systemctl enable fail2ban
# Check status
fail2ban-client status
fail2ban-client status sshd
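Two things I ended up needing while testing it against myself (the IP is an example):
# Watch bans happening in real time
tail -f /var/log/fail2ban.log
# Unban an address after a false positive
fail2ban-client set sshd unbanip 203.0.113.5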
Security Checklist ✓
- Firewall configured and active (UFW)
- Only necessary ports open (22, 80, 3306)
- SSH hardened (no root login, limited attempts)
- Fail2ban monitoring SSH and blocking attackers
- Database not exposed to public internet
- Strong passwords on all accounts
- Regular security updates enabled
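The last item on that checklist is handled by unattended-upgrades on both containers; a minimal setup using the Debian/Ubuntu defaults:
# Automatic security updates on both containers
apt install unattended-upgrades -y
dpkg-reconfigure -plow unattended-upgrades   # answer "Yes" to enable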
Phase 5: SFTP Access for Developers
SLAM students need to deploy their PHP code. SFTP provides secure file transfer over SSH.
👥 Create Developer User
# Create user for SLAM team
useradd -m -s /bin/bash slamdev
passwd slamdev
# Add user to www-data group for web access
usermod -aG www-data slamdev
# Set web directory as home for SFTP
usermod -d /var/www/gsb slamdev
chown -R slamdev:www-data /var/www/gsb
chmod -R 775 /var/www/gsb
⚙️ Configure SFTP Chroot
Restrict the SFTP account to the web tree so developers can't browse the rest of the filesystem. The exact Match block depends on your layout; note that the ChrootDirectory itself must be owned by root and not group-writable, which is why it points at /var/www rather than /var/www/gsb:
# Edit SSH config
vim /etc/ssh/sshd_config
# Append a Match block for the SFTP user
Match User slamdev
    ChrootDirectory /var/www
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
# Restart SSH to apply
systemctl restart sshd
✅ Test SFTP Connection
From a developer machine:
# Connect via SFTP
sftp slamdev@WEB_CONTAINER_IP
# Upload files
put index.php
put -r app/
# Verify in browser
http://WEB_CONTAINER_IP/index.php
Testing & Validation
An infrastructure is useless if it doesn't work. Time to test everything.
- Web server: Apache serves pages correctly. PHP executes. Error logs clean. Performance acceptable.
- Database: Web container connects to the database. Queries execute. No timeout issues. Proper authentication.
- Security: Firewall blocks unauthorized access. SSH rejects weak attempts. Fail2ban bans after 3 failures.
- SFTP: Developers can upload files. Permissions correct. Chroot jail works. No access outside the web directory.
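To make those checks repeatable, I keep them in a small script on the web container; the hostnames, IPs, and credentials are placeholders:
#!/bin/bash
# smoke-test.sh - quick validation of the GSB stack (run as root on gsb-web)
set -e
DB_IP="DB_CONTAINER_IP"   # placeholder

echo "[1/4] Apache responds"
curl -fsS -o /dev/null http://localhost/ && echo "  web OK"

echo "[2/4] Database reachable"
mysql -h "$DB_IP" -u gsb_user -p'StrongPassword123!' -e "SELECT 1;" gsb_db > /dev/null && echo "  db OK"

echo "[3/4] Firewall active"
ufw status | grep -q "Status: active" && echo "  ufw OK"

echo "[4/4] Fail2ban running"
systemctl is-active --quiet fail2ban && echo "  fail2ban OK"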
What I Learned
- Containers: LXC containers are lighter than VMs but just as isolated. Perfect for web infrastructure. Boot in seconds, not minutes.
- Security: Every exposed port is a potential attack vector. Default configurations are rarely secure. Defense in depth matters.
- Documentation: Write down every command. Note every config change. Future you (and your team) will thank you.
- Troubleshooting: Things WILL break. Logs are your best friend. Google is your second-best friend. Patience is essential.
- Networking: Understanding TCP/IP, ports, and routing is critical. You can't secure what you don't understand.
- Separation of concerns: Web and database on separate containers = better security, easier maintenance, clearer architecture.
Mistakes I Made (So You Don't Have To)
Mistake #1: Enabled UFW before configuring SSH rules → Locked out of container
Fix: Used Proxmox console to disable firewall. Reconfigured properly.
Mistake #2: Forgot to configure MariaDB for remote access → Web container couldn't connect
Fix: Changed bind-address to 0.0.0.0 in MariaDB config.
Mistake #3: Wrong permissions on /var/www/gsb → Apache couldn't read files
Fix: chown -R www-data:www-data /var/www/gsb and proper chmod.
Mistake #4: Didn't test SFTP chroot → Developers could access entire filesystem
Fix: Properly configured ChrootDirectory in sshd_config.
The Bigger Picture
This project wasn't just about "installing Apache and MariaDB." It was about:
- Architecture: Designing a system that makes sense
- Security: Protecting infrastructure from real threats
- Collaboration: Building something others depend on
- Troubleshooting: Solving problems independently
- Documentation: Making work reproducible and maintainable
This is what real system administration looks like. Not perfect the first time. Not without mistakes. But functional, secure, and professional.
What's Next?
This infrastructure is production-ready for a school environment, but there's always room for improvement:
- HTTPS: Add Let's Encrypt certificates for encrypted traffic. No excuse for HTTP-only in 2026.
- Monitoring: Set up Prometheus + Grafana for performance monitoring and alerting.
- Backups: Schedule daily database dumps and weekly full container snapshots (see the sketch below).
- CI/CD: Automate deployment with GitLab CI or GitHub Actions. Push code, auto-deploy.
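For the backup item above, a rough sketch of what that could look like; paths, VMIDs, and schedules are placeholders:
# On gsb-db: nightly dump of the application database at 02:00
# /etc/cron.d/gsb-db-backup  (assumes socket auth for root; otherwise add credentials)
0 2 * * * root mysqldump gsb_db | gzip > /var/backups/gsb_db_$(date +\%F).sql.gz
# On the Proxmox host: weekly snapshot backup of both containers (VMIDs are examples)
# /etc/cron.d/gsb-ct-backup
0 3 * * 0 root vzdump 101 102 --mode snapshot --storage local --compress zstd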
The Bottom Line
Building real infrastructure is challenging. It's frustrating. It's time-consuming. And it's absolutely essential for any serious IT professional.
You can watch 100 hours of tutorials. You can read every book on Linux administration. But until you actually build something from scratch, troubleshoot real problems, and secure a live system—you don't really know it.
Theory teaches you what to do.
Practice teaches you why.
Mistakes teach you how.
Want to Try This Yourself?
Download Proxmox, spin up some containers, and start building. Break things. Fix them. Learn.
The best way to learn system administration is to administer systems.
Resources: Proxmox Downloads | Proxmox Documentation | LXC Documentation
Questions about the setup? Hit a roadblock on your own project? Drop a comment—I'll help if I can.
Build. Break. Fix. Repeat. 🔧
class="tech-icon">📦Open-source virtualization platform. Think VMware, but free and better. Manages containers (LXC) and VMs (QEMU/KVM).
Lightweight Linux containers. Faster than VMs, isolated like VMs. Perfect for web infrastructure.
The classic web stack. Apache serves pages, PHP executes server-side code. Battle-tested and reliable.
MySQL fork. Stores application data. More open, faster, and better maintained than Oracle's MySQL.
Uncomplicated Firewall. Frontend for iptables. Blocks unwanted traffic, allows only what we need.
Système de prévention d'intrusion. Analyse les logs, détecte les attaques par force brute et bannit les IPs automatiquement. Votre gardien silencieux.