diff --git a/2026/day-01/README.md b/2026/day-01/README.md deleted file mode 100644 index 118d4e7f4..000000000 --- a/2026/day-01/README.md +++ /dev/null @@ -1,99 +0,0 @@ -# Day 01 – Introduction to DevOps and Cloud - -## Task -Today’s goal is to **set the foundation for your DevOps journey**. - -You will create a **90-day personal DevOps learning plan** that clearly defines: -- What is your understanding of DevOps and Cloud Engineering? -- Why you are starting learning DevOps & Cloud? -- Where do you want to reach? -- How you will stay consistent every single day? - -This is not a generic plan. -This is your **career execution blueprint** for the next 90 days. - ---- - -## Expected Output -By the end of today, you should have: - -- A markdown file named: - `learning-plan.md` - -or - -- A hand written plan for the next 90 Days (Recommended) - - -The file/note should clearly reflect your intent, discipline, and seriousness toward becoming a DevOps engineer. - ---- - -## Guidelines -Follow these rules while creating your plan: - -- Mention your **current level** - (student / fresher / working professional / non-IT background, etc.) -- Define **3 clear goals** for the next 90 days - (example: deploy a production-grade application on Kubernetes) -- Define **3 core DevOps skills** you want to build - (example: Linux troubleshooting, CI/CD pipelines, Kubernetes debugging) -- Allocate a **weekly time budget** - (example: 2–2.5 hours per day on weekdays, 4-6 hours weekends) -- Keep the document **under 1 page** -- Be honest and realistic; consistency matters more than perfection - ---- - -## Resources -You may refer to: - -- TrainWithShubham [course curriculum](https://english.trainwithshubham.com/JOSH_BATCH_10_Syllabus_v1.pdf) -- TrainWithShubham DevOps [roadmap](https://docs.google.com/spreadsheets/d/1eE-NhZQFr545LkP4QNhTgXcZTtkMFeEPNyVXAflXia0/edit?gid=2073716385#gid=2073716385) -- Your own past experience and career aspirations - -Avoid over-researching today. The focus is **clarity**, not depth. - ---- - -## Why This Matters for DevOps -DevOps engineers succeed not just because of tools, but because of: - -- Discipline -- Ownership -- Long-term thinking -- Ability to execute consistently - -In real jobs, no one tells you exactly what to do every day. -This task trains you to **take ownership of your own growth**, just like a real DevOps engineer. - -A clear plan: -- Reduces confusion -- Prevents burnout -- Keeps you focused during tough days - ---- - -## Submission -1. Fork this `90DaysOfDevOps` repository -2. Navigate to the `2026/day-01/` folder -3. Add your `learning-plan.md` file -4. Commit and push your changes to your fork - ---- - -## Learn in Public -Share your Day 01 progress on LinkedIn: - -- Post 2–3 lines on why you’re starting **#90DaysOfDevOps** -- Share one goal from your learning plan -- Optional: screenshot of your markdown file or a professional picture - -Use hashtags: -#90DaysOfDevOps -#DevOpsKaJosh -#TrainWithShubham - - -Happy Learning -**TrainWithShubham** \ No newline at end of file diff --git a/2026/day-01/learning_plan.md b/2026/day-01/learning_plan.md new file mode 100644 index 000000000..357ae471a --- /dev/null +++ b/2026/day-01/learning_plan.md @@ -0,0 +1,64 @@ +# 90-Day DevOps Learning Plan + +## Current Level +Working professional with DevOps engineering experience. + +--- + +## Understanding of DevOps & Cloud Engineering + +**DevOps** is a culture and practice that bridges development and operations through automation, collaboration, and continuous improvement. 
It focuses on shortening development cycles, increasing deployment frequency, and ensuring reliable releases through CI/CD, IaC, monitoring, and feedback loops. + +**Cloud Engineering** involves designing, deploying, and managing scalable infrastructure on cloud platforms (AWS, Azure, GCP), leveraging services like compute, storage, networking, and managed solutions to build resilient, cost-effective systems. + +--- + +## Why I'm Learning DevOps & Cloud + +- To stay current with evolving tools and best practices in the DevOps ecosystem +- To build production-grade expertise that delivers measurable business value +- To transition from operational tasks to strategic infrastructure design and optimization +- To strengthen my ability to architect cloud-native solutions at scale + +--- + +## Where I Want to Reach (90-Day Goals) + +1. **Deploy a production-grade microservices application on Kubernetes** with auto-scaling, monitoring, and GitOps-based deployments +2. **Build and maintain a fully automated CI/CD pipeline** using Jenkins/GitLab CI with security scanning, testing, and blue-green deployments +3. **Achieve proficiency in Infrastructure as Code** by managing multi-environment cloud infrastructure using Terraform and Ansible + +--- + +## Core DevOps Skills to Build + +1. **Advanced Kubernetes Management** - Pod troubleshooting, networking, security policies, Helm charts, and cluster optimization +2. **CI/CD Pipeline Mastery** - Building robust pipelines with automated testing, security gates, artifact management, and deployment strategies +3. **Observability & Incident Response** - Setting up comprehensive monitoring, logging, alerting with Prometheus/Grafana/ELK, and conducting RCA + +--- + +## Weekly Time Budget + +- **Weekdays (Mon-Fri):** 2-2.5 hours daily (hands-on labs, documentation, learning) +- **Weekends (Sat-Sun):** 4-6 hours per day (projects, debugging, writing blog posts) +- **Total weekly commitment:** 18-24 hours + +--- + +## Consistency Strategy + +- **Daily standup with myself:** 10-minute review each morning of what I'll learn today +- **Public accountability:** Share progress on LinkedIn/GitHub weekly +- **Hands-on first:** Build, break, fix - no passive watching without implementation +- **Document everything:** Maintain daily notes in a GitHub repo to track learnings and blockers +- **No zero days:** Even 30 minutes counts; consistency beats intensity +- **Weekend projects:** Apply weekly learnings to real-world scenarios every Saturday/Sunday + +--- + +**Start Date:** [Today's Date] +**End Date:** [90 Days from Today] +**Tracking:** GitHub repository with daily commit streak + +*This is my commitment to myself. 90 days of focused execution.* diff --git a/2026/day-02/linux-architecture-notes.md b/2026/day-02/linux-architecture-notes.md new file mode 100644 index 000000000..98d8e3749 --- /dev/null +++ b/2026/day-02/linux-architecture-notes.md @@ -0,0 +1,292 @@ +# Day 02 – Linux Architecture, Processes, and systemd + +## What We'll Learn Today +Understanding how Linux works is like learning how a car engine works before you start driving. It helps you fix problems faster and make better decisions as a DevOps engineer! + +--- + +## Linux Architecture: The Big Picture + +Think of Linux as a well-organized building with different floors: + +``` +┌─────────────────────────────────────┐ +│ Applications & User Programs │ (What you interact with) +│ (Firefox, Docker, VS Code, etc.) 
│ +├─────────────────────────────────────┤ +│ User Space │ (Where programs run) +│ (Libraries, System Tools) │ +├─────────────────────────────────────┤ +│ System Calls (Interface) │ (Communication bridge) +├─────────────────────────────────────┤ +│ Linux Kernel │ (The brain of the system) +│ (Process, Memory, Device Manager) │ +├─────────────────────────────────────┤ +│ Hardware │ (Physical components) +│ (CPU, RAM, Disk, Network) │ +└─────────────────────────────────────┘ +``` + +--- + +## Core Components of Linux + +### 1️The Linux Kernel (The Brain) + +**What is it?** +The kernel is the core of the operating system. It's like the manager of a restaurant who coordinates everything. + +**What does it do?** +- **Process Management**: Decides which program gets to use the CPU and when +- **Memory Management**: Allocates RAM to programs and makes sure they don't interfere with each other +- **Device Management**: Talks to your hardware (keyboard, mouse, disk, network card) +- **File System Management**: Organizes how files are stored and retrieved +- **Security**: Controls who can access what + +**Simple Analogy:** +Think of the kernel as a traffic controller at a busy intersection, making sure all cars (programs) move smoothly without crashing into each other. + +--- + +### 2️User Space (Where We Live) + +**What is it?** +User space is where all your applications and programs run. This is separated from the kernel for safety. + +**Why the separation?** +If a program crashes in user space, it won't bring down the entire system. The kernel stays protected! + +**What lives here?** +- Applications (browsers, text editors, Docker) +- System utilities (ls, cat, grep) +- Libraries (code that programs share) +- Your shell (bash, zsh) + +**Simple Analogy:** +User space is like the dining area of a restaurant. Customers (programs) can eat here, but they can't go into the kitchen (kernel) and mess with the stove! + +--- + +### 3️Init System / systemd (The Startup Manager) + +**What is Init?** +Init is the **first process** that starts when Linux boots up. It's like the opening manager of a store who turns on all the lights and gets everything ready. + +**What is systemd?** +systemd is the modern init system used by most Linux distributions today. It replaced the older "SysV init" system. + +**Why does it matter?** +- Starts and stops services (like web servers, databases) +- Manages dependencies (starts things in the right order) +- Monitors services and restarts them if they crash +- Handles system logging +- Much faster boot times than old init systems + +**Simple Analogy:** +systemd is like a stage manager at a theater. It makes sure all actors (services) come on stage at the right time, in the right order, and if someone misses their cue, it gets them back on stage! + +--- + +## How Processes Work in Linux + +### What is a Process? + +A process is simply a **program that's running**. When you double-click an app, you create a process. + +**Key Points:** +- Every process has a unique ID called **PID** (Process ID) +- The first process (systemd) has PID 1 +- Every process (except PID 1) has a parent process (PPID) + +### Process Lifecycle + +``` +1. Creation (Fork) + ↓ +2. Execution (Exec) + ↓ +3. Running + ↓ +4. Waiting/Sleeping (if needed) + ↓ +5. Termination +``` + +### How Are Processes Created? + +Linux creates new processes using two system calls: + +**1. 
fork()** – Makes a copy of the current process +- The parent process creates a child process +- The child is almost identical to the parent + +**2. exec()** – Replaces the child process with a new program +- After forking, the child calls exec() to run a different program + +**Simple Example:** +``` +When you type "ls" in your terminal: +1. Your shell (bash) forks itself +2. The child process calls exec(ls) +3. Now the child is running "ls" command +4. When done, the child exits +5. The parent (bash) continues +``` + +--- + +## Process States + +A process can be in different states: + +| State | Symbol | What It Means | +|-------|--------|---------------| +| **Running** | R | Currently using the CPU | +| **Sleeping** | S | Waiting for something (like user input) | +| **Stopped** | T | Paused (you can resume it) | +| **Zombie** | Z | Finished but parent hasn't collected it yet | +| **Dead** | X | Completely terminated | + +**Check process states:** +```bash +ps aux +``` + +--- + +## systemd Deep Dive + +### Why systemd Matters for DevOps + +As a DevOps engineer, you'll use systemd **every single day** to: +- Start/stop services (nginx, docker, databases) +- Check service status +- View logs +- Set services to start on boot +- Troubleshoot why services failed + +### Key systemd Concepts + +**1. Units** +Everything in systemd is a "unit". Types include: +- `.service` – Services (nginx, docker) +- `.socket` – Network sockets +- `.timer` – Scheduled tasks (like cron) +- `.mount` – File systems +- `.target` – Groups of units + +**2. Unit Files** +Configuration files that describe how to manage a service. + +**Location:** +``` +/etc/systemd/system/ (custom services) +/lib/systemd/system/ (system services) +``` + +### Essential systemd Commands + +```bash +# Start a service +sudo systemctl start nginx + +# Stop a service +sudo systemctl stop nginx + +# Restart a service +sudo systemctl restart nginx + +# Check status +sudo systemctl status nginx + +# Enable (start on boot) +sudo systemctl enable nginx + +# Disable (don't start on boot) +sudo systemctl disable nginx + +# View logs for a service +sudo journalctl -u nginx + +# View real-time logs +sudo journalctl -u nginx -f + +# List all running services +systemctl list-units --type=service --state=running + +# Check if a service failed +systemctl is-failed nginx +``` + +--- + +## Practical Examples for DevOps + +### Example 1: Check if Docker is Running + +```bash +systemctl status docker +``` + +If it's not running: +```bash +sudo systemctl start docker +sudo systemctl enable docker # Start on boot +``` +--- + +### Example 2: View Service Logs + +```bash +# Last 100 lines +journalctl -u docker -n 100 + +# Real-time logs (like tail -f) +journalctl -u docker -f + +# Logs from today +journalctl -u docker --since today +``` + +## Key Takeaways for DevOps Engineers + +1. **Linux has layers**: Hardware → Kernel → User Space → Applications +2. **The kernel manages everything**: processes, memory, devices, files +3. **Processes are created using fork() and exec()** +4. **systemd is your service manager**: Start, stop, monitor, and troubleshoot services +5. 
**Learn systemctl and journalctl**: These are your daily tools for managing services + + +--- + +## Quick Reference Cheat Sheet + +### Process Commands +```bash +ps aux # List all processes +ps -ef # Another format +top # Real-time process viewer +htop # Better top (needs installation) +kill # Stop a process +kill -9 # Force kill +pgrep # Find PID by name +pkill # Kill by name +``` + +### systemd Commands +```bash +systemctl start +systemctl stop +systemctl restart +systemctl status +systemctl enable +systemctl disable +systemctl list-units --type=service +journalctl -u +journalctl -u -f +systemctl daemon-reload +``` + +--- + +*The best way to learn is by doing it.* \ No newline at end of file diff --git a/2026/day-03/linux-commands-cheatsheet.md b/2026/day-03/linux-commands-cheatsheet.md new file mode 100644 index 000000000..85820fc0e --- /dev/null +++ b/2026/day-03/linux-commands-cheatsheet.md @@ -0,0 +1,280 @@ +# Day-03: Linux Commands Cheat Sheet + +## Process Management + +### ps - Process Status +```bash +ps aux # List all processes with detailed info (a=all users, u=user-oriented, x=include non-terminal) +ps -ef # Full format listing (alternative to aux) +ps aux | grep nginx # Find specific process +ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head # Show top memory consumers +``` + +### top/htop - Real-time Process Monitoring +```bash +top # Interactive process viewer +top -u username # Show processes for specific user +htop # Enhanced interactive process viewer (if installed) +``` + +### kill - Terminate Processes +```bash +kill -9 PID # Force kill process (SIGKILL) +kill -15 PID # Graceful termination (SIGTERM) - default +killall nginx # Kill all processes by name +pkill -f "pattern" # Kill processes matching pattern +``` + +### systemctl - Service Management +```bash +systemctl status nginx # Check service status +systemctl start nginx # Start service +systemctl stop nginx # Stop service +systemctl restart nginx # Restart service +systemctl reload nginx # Reload configuration without restart +systemctl enable nginx # Enable service on boot +systemctl disable nginx # Disable service on boot +systemctl list-units --type=service --state=running # List running services +``` + +### journalctl - System Logs +```bash +journalctl -u nginx # Show logs for specific service +journalctl -f # Follow logs in real-time +journalctl -n 50 # Show last 50 lines +journalctl --since "1 hour ago" # Logs from last hour +journalctl -p err # Show only error-level messages +journalctl -xe # Show recent logs with explanation +``` + +### nohup & bg/fg - Background Processes +```bash +nohup command & # Run command immune to hangups +jobs # List background jobs +bg %1 # Resume job 1 in background +fg %1 # Bring job 1 to foreground +``` + +--- + +## File System + +### ls - List Directory Contents +```bash +ls -la # Long format with hidden files (l=long, a=all) +ls -lh # Human-readable file sizes +ls -lt # Sort by modification time +ls -ltr # Sort by time, reverse (oldest first) +ls -lS # Sort by file size +``` + +### find - Search for Files +```bash +find /var/log -name "*.log" # Find files by name +find /home -type f -mtime -7 # Files modified in last 7 days +find /tmp -type f -size +100M # Files larger than 100MB +find /var -name "*.log" -mtime +30 -delete # Delete old log files +find . 
-type f -exec chmod 644 {} \; # Execute command on found files +``` + +### du - Disk Usage +```bash +du -sh * # Summary of each item in current directory (s=summary, h=human-readable) +du -sh /var/log # Total size of directory +du -h --max-depth=1 # Size of subdirectories, 1 level deep +du -ah | sort -rh | head -20 # Top 20 largest files/directories +``` + +### df - Disk Free Space +```bash +df -h # Human-readable disk space +df -i # Show inode usage +df -hT # Include filesystem type +``` + +### tar - Archive Files +```bash +tar -czf archive.tar.gz /path/to/dir # Create compressed archive (c=create, z=gzip, f=file) +tar -xzf archive.tar.gz # Extract compressed archive (x=extract) +tar -tzf archive.tar.gz # List contents without extracting (t=list) +tar -xzf archive.tar.gz -C /dest/path # Extract to specific directory +``` + +### grep - Search Text +```bash +grep -r "error" /var/log # Recursive search (r=recursive) +grep -i "error" file.log # Case-insensitive search +grep -n "error" file.log # Show line numbers +grep -v "info" file.log # Invert match (exclude lines) +grep -A 5 "error" file.log # Show 5 lines after match +grep -B 5 "error" file.log # Show 5 lines before match +grep -C 5 "error" file.log # Show 5 lines before and after +``` + +### chmod/chown - Permissions +```bash +chmod 755 script.sh # rwxr-xr-x permissions +chmod +x script.sh # Add execute permission +chmod -R 644 /var/www # Recursive permission change +chown user:group file # Change owner and group +chown -R www-data:www-data /var/www # Recursive ownership change +``` + +### ln - Create Links +```bash +ln -s /path/to/file link # Create symbolic link (s=symbolic) +ln file hardlink # Create hard link +``` + +--- + +## Networking & Troubleshooting + +### netstat - Network Statistics (legacy) +```bash +netstat -tuln # List listening ports (t=TCP, u=UDP, l=listening, n=numeric) +netstat -plant # Show process using ports (requires root, p=program, a=all) +netstat -r # Show routing table +``` + +### ss - Socket Statistics (modern alternative to netstat) +```bash +ss -tuln # List listening TCP/UDP ports +ss -tulpn # Include process information +ss -s # Show summary statistics +ss -o state established # Show established connections with timer info +``` + +### curl - Transfer Data +```bash +curl -I https://example.com # Fetch headers only (I=head) +curl -o file.txt https://example.com/file # Save to file (o=output) +curl -L https://example.com # Follow redirects (L=location) +curl -X POST -d "data" https://api.example.com # POST request +curl -v https://example.com # Verbose output +curl -k https://example.com # Ignore SSL certificate errors +``` + +### wget - Download Files +```bash +wget https://example.com/file.zip # Download file +wget -c https://example.com/file.zip # Continue interrupted download +wget -r -np -k https://example.com # Mirror website (r=recursive, np=no parent, k=convert links) +``` + +### ping - Test Connectivity +```bash +ping -c 4 google.com # Send 4 packets (c=count) +ping -i 2 google.com # 2 second interval between packets +``` + +### traceroute - Trace Network Path +```bash +traceroute google.com # Show route packets take +traceroute -n google.com # Don't resolve hostnames (faster) +``` + +### dig - DNS Lookup +```bash +dig example.com # Query DNS +dig example.com +short # Brief output +dig @8.8.8.8 example.com # Use specific DNS server +dig example.com MX # Query mail servers +dig -x 8.8.8.8 # Reverse DNS lookup +``` + +### nslookup - DNS Query (alternative) +```bash +nslookup example.com # Simple DNS 
lookup +nslookup example.com 8.8.8.8 # Use specific DNS server +``` + +### tcpdump - Packet Analyzer +```bash +tcpdump -i eth0 # Capture on interface eth0 +tcpdump -i eth0 port 80 # Capture HTTP traffic +tcpdump -i eth0 -w capture.pcap # Write to file +tcpdump -i eth0 host 192.168.1.1 # Capture traffic to/from specific host +tcpdump -i eth0 -n # Don't resolve hostnames +``` + +### iptables - Firewall Rules +```bash +iptables -L -n -v # List all rules (L=list, n=numeric, v=verbose) +iptables -A INPUT -p tcp --dport 80 -j ACCEPT # Allow HTTP +iptables -D INPUT 3 # Delete rule 3 from INPUT chain +iptables -F # Flush all rules (careful!) +``` + +### nc (netcat) - Network Swiss Army Knife +```bash +nc -zv host 80 # Test if port is open (z=scan, v=verbose) +nc -l 8080 # Listen on port 8080 +nc host 8080 < file.txt # Send file over network +``` + +### lsof - List Open Files +```bash +lsof -i :80 # Show what's using port 80 +lsof -i TCP:1-1024 # Show processes using ports 1-1024 +lsof -u username # Show files opened by user +lsof -c nginx # Show files opened by nginx +lsof -p PID # Show files opened by specific process +``` + +### ip - Network Configuration (modern alternative to ifconfig) +```bash +ip addr show # Show IP addresses +ip link show # Show network interfaces +ip route show # Show routing table +ip -s link # Show interface statistics +ip addr add 192.168.1.100/24 dev eth0 # Add IP address +``` + +--- + +## Quick Troubleshooting Workflows + +### High CPU Usage +```bash +top +ps aux --sort=-%cpu | head -10 +``` + +### High Memory Usage +```bash +free -h +ps aux --sort=-%mem | head -10 +``` + +### Disk Space Issues +```bash +df -h +du -sh /* | sort -rh | head -10 +find /var/log -type f -size +100M +``` + +### Network Connectivity Issues +```bash +ping -c 4 8.8.8.8 +traceroute google.com +dig example.com +curl -I https://example.com +``` + +### Port Troubleshooting +```bash +ss -tulpn | grep :80 +lsof -i :80 +netstat -tulpn | grep :80 +``` + +### Service Not Starting +```bash +systemctl status service-name +journalctl -u service-name -n 50 +journalctl -xe +``` + +--- +
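+
+## Bonus: A Minimal Custom Service (Sketch)
+
+The Day-02 notes list where unit files live but don't show one. The sketch below creates a tiny custom service and then checks it with the same `systemctl`/`journalctl` commands from this sheet. The names `myapp` and `/opt/myapp/run.sh` are placeholders for illustration only, not files provided by this repo.
+
+```bash
+# Write a minimal unit file for a hypothetical service called "myapp".
+# Assumes /opt/myapp/run.sh exists, is executable, and runs in the foreground.
+sudo tee /etc/systemd/system/myapp.service > /dev/null <<'EOF'
+[Unit]
+Description=My sample application
+After=network.target
+
+[Service]
+# Placeholder path to your long-running script or binary
+ExecStart=/opt/myapp/run.sh
+# Restart automatically if the process exits with an error
+Restart=on-failure
+RestartSec=5
+
+[Install]
+WantedBy=multi-user.target
+EOF
+
+sudo systemctl daemon-reload            # Pick up the new unit file
+sudo systemctl enable --now myapp       # Start now and on every boot
+systemctl status myapp                  # Verify it is active (running)
+journalctl -u myapp -f                  # Follow its logs in real time
+```
+
+If the service fails, the "Service Not Starting" workflow above applies unchanged: check `systemctl status myapp`, then `journalctl -u myapp -n 50`.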
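+
+## Bonus: Seeing fork() and exec() from the Shell
+
+The Day-02 notes explain that every command you type is a fork of your shell followed by an exec. This quick sketch uses only `echo`, `ps`, and shell variables (no extra tools) to make that parent/child relationship visible.
+
+```bash
+echo "my shell's PID: $$"                           # $$ = PID of the current (parent) shell
+bash -c 'echo "child PID: $$  parent PID: $PPID"'   # forked child reports its own PID and its parent's
+
+sleep 60 &                                          # fork a child into the background
+ps -o pid,ppid,comm -p $!                           # $! = PID of that child; its PPID is your shell
+kill $!                                             # clean up the background sleep
+```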