<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Blog | Padhi Mayank]]></title><description><![CDATA[A tech journal by Mayank Padhi, documenting ML models, Linux experiments, and everything in between, for bits, bytes, and the bystanders watching him debug at 3AM.]]></description><link>https://blog.mayankpadhi.com</link><generator>RSS for Node</generator><lastBuildDate>Wed, 29 Apr 2026 06:09:23 GMT</lastBuildDate><atom:link href="https://blog.mayankpadhi.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Tech Behind the Stream: How JioHotstar, Netflix, Zee5, and SonyLIV Power Millions of Screens]]></title><description><![CDATA[The Engine Base
If you look at most OTT architecture diagrams, they look clean, linear, and reassuring.
Real live streaming systems are none of those things.
At scale, especially in India, live streaming is not a video problem. It’s a distributed sys...]]></description><link>https://blog.mayankpadhi.com/the-tech-behind-the-stream-how-jiohotstar-netflix-zee5-and-sonyliv-power-millions-of-screens</link><guid isPermaLink="true">https://blog.mayankpadhi.com/the-tech-behind-the-stream-how-jiohotstar-netflix-zee5-and-sonyliv-power-millions-of-screens</guid><category><![CDATA[ottadvertising]]></category><category><![CDATA[netflix]]></category><category><![CDATA[AWS]]></category><category><![CDATA[System Architecture]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[scalability]]></category><dc:creator><![CDATA[Mayank Padhi]]></dc:creator><pubDate>Mon, 02 Feb 2026 13:50:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770037696245/10f10208-47b2-4572-9c20-699f01aa1221.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-the-engine-base">The Engine Base</h2>
<p>If you look at most OTT architecture diagrams, they look clean, linear, and reassuring.</p>
<p>Real live streaming systems are none of those things.</p>
<p>At scale, especially in India, live streaming is not a video problem. It’s a <strong>distributed systems chaos management problem</strong> with video as the payload.</p>
<p>Your real enemies are:</p>
<ul>
<li><p>Sudden concurrency spikes</p>
</li>
<li><p>Control plane overload</p>
</li>
<li><p>Cost explosions at the Content Delivery Network layer</p>
</li>
<li><p>Ad pipeline latency</p>
</li>
<li><p>DRM bottlenecks nobody load tests properly</p>
</li>
</ul>
<p>If your system survives a normal day, congratulations.<br />If it survives India vs Pakistan final overs — now you’re running a real platform.</p>
<h3 id="heading-the-only-mental-model-that-matters">The Only Mental Model That Matters</h3>
<p>At scale, every streaming platform ends up optimizing three axes:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Axis</td><td>What It Means</td></tr>
</thead>
<tbody>
<tr>
<td>Latency</td><td>How close to real-time users are</td></tr>
<tr>
<td>Reliability</td><td>Whether stream survives regional failures</td></tr>
<tr>
<td>Cost</td><td>CDN + compute + encoding + egress</td></tr>
</tbody>
</table>
</div><p>You only get to optimize two.</p>
<p>Anyone promising all three is either:</p>
<ul>
<li><p>Pre-scale</p>
</li>
<li><p>Hiding cost numbers</p>
</li>
<li><p>Or not running live events yet</p>
</li>
</ul>
<h3 id="heading-what-the-real-stack-looks-like-from-an-operators-pov">What the Real Stack Looks Like (From an Operator’s POV)</h3>
<p>Forget marketing diagrams. Video rarely kills you; the control plane almost always does.</p>
<p>Real stack layers behave like blast zones:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770039979553/2ef22fb8-8cb3-4883-ad35-91248fd0c209.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-example-1-jiohotstar-built-for-stampede-traffic-not-average-load"><strong>Example 1: JioHotstar ~</strong> Built For Stampede Traffic, Not Average Load</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758578704135/1baa59a3-fda6-476a-bab9-f2564726cbd8.png" alt class="image--center mx-auto" /></p>
<p>Figure 1. JioHotstar Stream Stack (bundled from sources) <a target="_blank" href="https://blog.hotstar.com/scaling-infrastructure-for-millions-from-challenges-to-triumphs-part-1-6099141a99ef">Reference →</a></p>
<p>The hardest engineering problem in India streaming isn’t sustained throughput. It’s <strong>instantaneous concurrency spikes</strong>.</p>
<p>During cricket:</p>
<ul>
<li><p>Millions join within seconds</p>
</li>
<li><p>Session auth spikes</p>
</li>
<li><p>Manifest requests spike</p>
</li>
<li><p>Telemetry pipelines flood</p>
</li>
</ul>
<p>If you don’t isolate control plane early, you die early.</p>
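<p>One common isolation tactic is to shed excess control-plane load at the door instead of letting it cascade. The sketch below is a minimal token-bucket admission gate, not JioHotstar’s actual implementation; real platforms layer this with queues, priorities, and per-tenant quotas.</p>

```python
import time

class TokenBucket:
    """Admit control-plane requests at a bounded rate; shed the rest.

    Illustrative sketch only -- not any platform's production code.
    """
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # shed: the client retries with jittered backoff

# A stampede of 10,000 instant requests only gets ~the burst through;
# the rest are shed instead of toppling the session/auth services.
bucket = TokenBucket(rate_per_sec=1000, burst=100)
admitted = sum(bucket.allow() for _ in range(10_000))
```

<p>The point isn’t the bucket itself; it’s that the video plane keeps serving segments while the control plane degrades gracefully.</p>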
<h4 id="heading-event-streaming-everywhere-kafka-class-backbone">Event Streaming Everywhere (Kafka-Class Backbone)</h4>
<p>To survive, you want an event-streaming backbone carrying:</p>
<ul>
<li><p>Playback telemetry</p>
</li>
<li><p>Session state propagation</p>
</li>
<li><p>Real-time autoscaling signals</p>
</li>
</ul>
<p>The real engineering problem:<br />hot partitions when everyone’s traffic keys on a single match ID. If you didn’t simulate “everyone refreshes the app during the wicket replay”, you’re already in trouble.</p>
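<p>One standard mitigation is key salting: spread the single hot key across a handful of sub-keys and re-aggregate downstream. A minimal sketch (names and salt count are illustrative, not from any platform’s codebase):</p>

```python
import hashlib

def salted_partition(match_id: str, session_id: str, num_salts: int = 16) -> str:
    """Spread one hot key (a single match ID) across num_salts sub-keys.

    A deterministic per-session salt keeps one viewer's events ordered;
    consumers aggregate the sub-keys back together downstream.
    """
    salt = int(hashlib.sha256(session_id.encode()).hexdigest(), 16) % num_salts
    return f"{match_id}#{salt}"

# 10,000 sessions now fan out over at most 16 partition keys instead of one.
keys = {salted_partition("IND-vs-PAK-final", f"session-{i}") for i in range(10_000)}
```

<p>The cost is downstream complexity: any per-match aggregation now has to merge the salted sub-streams.</p>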
<h4 id="heading-memory-first-state-redis-in-memory-grid">Memory-First State (Redis / In-Memory Grid)</h4>
<p>Good for:</p>
<ul>
<li><p>Live state fanout</p>
</li>
<li><p>Session acceleration</p>
</li>
<li><p>Personalization signals</p>
</li>
</ul>
<p>Hidden risk:<br />Cluster rebalance during peak = cascading latency storm.</p>
<h4 id="heading-multi-content-delivery-network-aggressive-routing">Multi-Content Delivery Network Aggressive Routing</h4>
<p>Multi-CDN routing is especially critical in India, where last-mile ISP variability is extreme. The tradeoff: more routing logic means more control plane complexity.</p>
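<p>At its core, multi-CDN routing is a scoring problem: pick the CDN with the best recent quality-of-experience for a viewer’s ISP, and fall back when you have no data. A toy sketch — the score table would really be fed by client telemetry (rebuffer ratio, startup time), and every name here is a made-up stand-in:</p>

```python
def pick_cdn(isp: str, scores: dict[str, dict[str, float]], default: str = "cdn-a") -> str:
    """Pick the CDN with the best recent QoE score for this ISP.

    Hypothetical sketch: real routers also weigh cost, contract
    commitments, and capacity headroom, and they dampen flapping.
    """
    per_isp = scores.get(isp)
    if not per_isp:
        return default  # no telemetry yet: use the default CDN
    return max(per_isp, key=per_isp.get)

# Illustrative telemetry: higher score = better recent QoE.
scores = {
    "jio":    {"cdn-a": 0.82, "cdn-b": 0.91},
    "airtel": {"cdn-a": 0.88, "cdn-b": 0.79},
}
```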
<hr />
<h3 id="heading-example-2-netflix-the-cinema-in-your-neighborhood"><strong>Example 2: Netflix ~ The "Cinema in Your Neighborhood"</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758656174777/a4bd9308-cb56-47a4-a605-c8e7e43bfd11.png" alt class="image--center mx-auto" /></p>
<p>Figure 2. Netflix Tech Stack <a target="_blank" href="https://www.skiplevel.co/blog/tech-stack-behind-netflix-streaming-secrets">Source→</a></p>
<h3 id="heading-netflixs-biggest-architectural-win-wasnt-just-encoding-av1-gthttpsresearchnetflixcomresearch-areavideo-encoding-and-quality">Netflix’s biggest architectural win wasn’t just <a target="_blank" href="https://research.netflix.com/research-area/video-encoding-and-quality">AV1 encoding</a>.</h3>
<p>It was supply chain. They moved storage + delivery inside ISP networks.</p>
<p>That changes everything:</p>
<ul>
<li><p>Transit cost drops massively</p>
</li>
<li><p>Latency stabilizes</p>
</li>
<li><p>Fewer BGP(<strong><em>routing protocol</em></strong>) surprises</p>
</li>
</ul>
<blockquote>
<p>Netflix operates its proprietary Open Connect CDN, which delivers 100% of video traffic through over 8,000 appliances deployed in close to 1,000 locations worldwide. <a target="_blank" href="https://blog.blazingcdn.com/en-us/cdn-netflix-tech-stack-open-connect-home-caching-nodes">Source→</a></p>
</blockquote>
<p>In India, Netflix has strategically placed Open Connect Appliances within ISP networks to minimize transit costs and improve streaming quality.</p>
<h3 id="heading-why-this-doesnt-fully-solve-live-sports">Why This Doesn’t Fully Solve Live Sports</h3>
<p>Live is unpredictable. You cannot pre-cache future segments.</p>
<p>So live success depends more on:</p>
<ul>
<li><p>Encoder redundancy</p>
</li>
<li><p>Packaging region failover</p>
</li>
<li><p>Manifest service resilience</p>
</li>
</ul>
<p>Not CDN strength alone.</p>
<hr />
<h3 id="heading-example-3-amazon-prime-video-the-ultra-reliable-always-on-machine"><strong>Example 3: Amazon Prime Video ~ The Ultra-Reliable "Always-On" Machine</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759304714283/435a43f2-a419-4370-89f3-f163ebe3dbf2.png" alt class="image--center mx-auto" /></p>
<p>Figure 3. Amazon Prime Stack <a target="_blank" href="https://www.infoq.com/news/2023/10/prime-video-availability-costs/">Source→</a></p>
<p>If JioHotstar is a stadium and Netflix is a neighborhood cinema, Amazon Prime Video is the <strong>mission-critical infrastructure</strong> designed to never fail. Prime Video’s goal is "Five Nines" (99.999%) reliability, meaning less than 26 seconds of downtime per month.</p>
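<p>That 26-second figure is easy to sanity-check with quick arithmetic:</p>

```python
# "Five nines" downtime budget per 30-day month.
availability = 0.99999
seconds_per_month = 30 * 24 * 3600        # 2,592,000 seconds
downtime = (1 - availability) * seconds_per_month
# downtime ≈ 25.9 seconds of allowed downtime per month
```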
<ul>
<li><p><strong>AWS Elemental Power:</strong> They leverage a specialized suite of tools:</p>
<ul>
<li><p><strong>MediaConnect:</strong> Ingests the raw feed securely.</p>
</li>
<li><p><strong>MediaLive:</strong> Encodes the video into multiple quality levels in real-time.</p>
</li>
<li><p><strong>MediaPackage:</strong> Prepares the video for every possible device, from a 4K TV to an old smartphone.</p>
</li>
</ul>
</li>
<li><p><strong>Dedicated Highways:</strong> They use <strong>Direct Connect</strong> and <strong>Transit Gateways</strong> to create private, high-speed "highways" between the live event (like an NFL stadium) and the AWS cloud, bypassing the messy public internet entirely.</p>
</li>
</ul>
<h3 id="heading-dedicated-media-pipeline-predictability">Dedicated Media Pipeline = Predictability</h3>
<p>General compute works until encoder jitter appears, kernel noise hits real-time workloads, or shared network bursts happen. Dedicated media services trade cost for deterministic behavior. At scale, determinism is cheaper than chaos.</p>
<p>Most teams scale video delivery, but very few scale license servers properly. DRM outages cause “video loads but doesn’t play”, which users interpret as “the platform is broken.”</p>
<hr />
<h3 id="heading-example-4-zee5-the-cost-efficient-kitchen"><strong>Example 4: Zee5 ~ The Cost-Efficient Kitchen</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759304749936/242a63b0-cc62-4b84-9b01-93f71dbb531f.png" alt class="image--center mx-auto" /></p>
<p>Figure 4. Zee5 (Bundled from sources) <a target="_blank" href="https://www.medianews4u.com/zee5s-in-house-tech-innovation-sets-stage-for-cost-effective-growth/">Zee5→</a></p>
<p>Zee5 has optimized the "cooking" process to handle India’s diverse mobile landscape.</p>
<ul>
<li><p><strong>Custom Transcoding:</strong> They moved away from generic tools to build an <strong>In-house Transcoder</strong> on Google Cloud.</p>
</li>
<li><p><strong>Hybrid Cloud:</strong> By mixing AWS and GCP via high-speed private links, they’ve reduced file sizes. This means cheaper data for users and faster load times on budget smartphones.</p>
</li>
</ul>
<h3 id="heading-margin-makes-money">!! Margin Makes Money !!</h3>
<p>In the high-stakes world of streaming, optimizing the "kitchen" is a matter of financial survival; shaving just 5–8% off your bitrate without compromising visual quality can translate into millions of dollars in annual savings on CDN and delivery costs. This "stakeholder survival math" is what drives platforms like Zee5 to build custom in-house transcoders that prioritize mobile-first efficiency over generic cloud solutions.</p>
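<p>A back-of-envelope calculation shows why a few percent of bitrate matters. Every number below is an illustrative assumption, not a Zee5 figure:</p>

```python
# Back-of-envelope CDN savings from a bitrate reduction.
# All inputs are illustrative assumptions, not Zee5 figures.
avg_bitrate_mbps = 3.0           # assumed average delivered bitrate
watch_hours_per_month = 500e6    # assumed platform-wide watch hours
cdn_cost_per_gb = 0.01           # assumed $ per GB of egress

gb_per_hour = avg_bitrate_mbps * 3600 / 8 / 1000   # Mbps -> GB per hour
monthly_gb = gb_per_hour * watch_hours_per_month
monthly_cost = monthly_gb * cdn_cost_per_gb        # ~$6.75M / month here

savings_5pct = monthly_cost * 0.05                 # ~$337k/month, ~$4M/year
```

<p>Even with these rough inputs, a 5% bitrate cut lands in the millions of dollars per year, which is exactly the “stakeholder survival math” above.</p>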
<hr />
<h3 id="heading-example-5-sonyliv-the-interaction-engine"><strong>Example 5: SonyLIV ~ The Interaction Engine</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759305373689/850bc4eb-3f60-4007-9ae5-fae191b9533c.png" alt class="image--center mx-auto" /></p>
<p>Figure 5. SonyLIV Stack (Bundled from sources) <a target="_blank" href="https://redis.io/customers/sonyliv/">source→</a></p>
<p>SonyLIV focuses on how to make every second of a stream interactive and profitable.</p>
<ul>
<li><p><strong>Seamless Ads:</strong> They use <strong>Server-Side Ad Insertion (SSAI)</strong>. Instead of the app "pausing" for an ad, the ad is stitched directly into the video stream. No stutter, no "Ad Loading" screens.</p>
</li>
<li><p><strong>Live Engagement:</strong> Using a real-time layer like <strong>Lightstreamer</strong>, they push live polls and quizzes to millions of fans simultaneously without disturbing the video feed.</p>
</li>
</ul>
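<p>The SSAI idea can be sketched in a few lines: splice ad segments into the content’s segment list and mark the boundaries with HLS discontinuity tags so the player resets its decoder cleanly. A minimal sketch with made-up segment names — real stitchers also rewrite EXTINF durations, media-sequence numbers, and per-user tracking beacons:</p>

```python
def stitch_ads(content_segments, ad_segments, ad_break_index):
    """Splice ad segments into an HLS segment list at ad_break_index.

    Minimal SSAI sketch; segment URIs here are hypothetical.
    """
    lines = []
    for i, seg in enumerate(content_segments):
        if i == ad_break_index:
            lines.append("#EXT-X-DISCONTINUITY")   # codec/timestamp reset
            lines.extend(ad_segments)
            lines.append("#EXT-X-DISCONTINUITY")
        lines.append(seg)
    return lines

playlist = stitch_ads(
    ["content0.ts", "content1.ts", "content2.ts"],
    ["ad0.ts", "ad1.ts"],
    ad_break_index=2,
)
```

<p>Because the ad arrives as ordinary video segments in the same stream, the client never shows an “Ad Loading” state and ad blockers have nothing to intercept.</p>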
<hr />
<h2 id="heading-key-takeaways-the-streaming-strategy-matrix"><strong>Key Takeaways: The Streaming Strategy Matrix</strong></h2>
<ul>
<li><p><strong>JioHotstar (Scalability First):</strong> Built to survive massive, sudden traffic spikes by using <strong>Apache Kafka</strong> to decouple systems and a <strong>Multi-CDN strategy</strong> to ensure no single point of failure during national events.</p>
</li>
<li><p><strong>Netflix (Quality First):</strong> Invests heavily in <strong>Edge Computing</strong> through the <strong>Open Connect</strong> program, moving content inside the ISP's network to eliminate latency and provide a premium 4K experience.</p>
</li>
<li><p><strong>Zee5 (Efficiency First):</strong> Focuses on <strong>Cost-Optimization</strong> and mobile performance by leveraging a custom, in-house transcoder on a hybrid cloud setup (AWS and Google Cloud).</p>
</li>
<li><p><strong>SonyLIV (Engagement First):</strong> Prioritizes <strong>Monetization and Interactivity</strong> through <strong>Server-Side Ad Insertion (SSAI)</strong> and real-time messaging layers that keep fans engaged without breaking the stream.</p>
</li>
<li><p><strong>Amazon Prime (Reliability First):</strong> Implements <strong>Multi-Region Redundancy</strong> using the AWS Elemental suite, ensuring that if one entire geographic region fails, a second one is already running to pick up the slack.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[The Virtualization Race: An Inside Look Hyper Fast  Cloud Virtual Machine Manager]]></title><description><![CDATA[What if the cloud isn’t what it seems? Beyond dashboards and serverless magic, a secret race rages inside the machines. Three giants have built their own beasts to carry the world’s apps. This is how they did it.
Act 1: The Problem with Old‑School Vi...]]></description><link>https://blog.mayankpadhi.com/the-virtualization-race-an-inside-look-hyper-fast-cloud-virtual-machine-manager</link><guid isPermaLink="true">https://blog.mayankpadhi.com/the-virtualization-race-an-inside-look-hyper-fast-cloud-virtual-machine-manager</guid><category><![CDATA[Azure]]></category><category><![CDATA[AWS]]></category><category><![CDATA[GCP]]></category><category><![CDATA[virtual machine]]></category><dc:creator><![CDATA[Mayank Padhi]]></dc:creator><pubDate>Tue, 23 Dec 2025 15:48:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766946083647/8208e494-cb13-4dae-b174-fdf6be752e02.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>What if the cloud isn’t what it seems? Beyond dashboards and serverless magic, a secret race rages inside the machines. Three giants have built their own beasts to carry the world’s apps. This is how they did it.</p>
<h3 id="heading-act-1-the-problem-with-oldschool-virtualization">Act 1: The Problem with Old‑School Virtualization</h3>
<ul>
<li>The old setup had one big hypervisor doing everything: run VMs, push packets, feed disks, and keep bad actors out. Think of a waiter who must cook, serve, wash dishes, and guard the door all at once. It worked, but it slowed down when the restaurant got busy.</li>
</ul>
<p>Figure 1. VMware ESXi architecture (Type 1 hypervisor)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758574332331/73804b1c-ac45-436a-a501-3271185a8122.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Performance overhead and scalability pain: The hypervisor handled networking, storage, and security, eating CPU cycles that should have powered the app.</p>
</li>
<li><p>The cloud couldn’t scale like this. The answer wasn’t a tweak. The answer was to break the problem apart and rebuild it. Here’s how it was done.</p>
</li>
</ul>
<hr />
<h3 id="heading-act-2-the-solutions-the-inhouse-beasts">Act 2: The Solutions – The In‑House Beasts</h3>
<h4 id="heading-chapter-1-aws-the-radical-hardware-revolution-nitro">Chapter 1: AWS – The Radical Hardware Revolution (Nitro)</h4>
<p>Story arc:<br />AWS didn’t just tune a hypervisor; it broke the whole thing into pieces and pushed the heavy lifting into custom cards, leaving a tiny layer to guard CPU and memory. Think of a kitchen where the chef only cooks while robots handle delivery, dishwashing, and the door.</p>
<p>The how:</p>
<ul>
<li><p>Networking offload: A Nitro card takes over VPC networking, so packets don’t steal host CPU time.</p>
</li>
<li><p>Storage offload: Another Nitro card handles EBS and local NVMe I/O, keeping reads/writes fast and steady.</p>
</li>
<li><p>System control + security: A controller and a security chip own secure boot, firmware trust, and management APIs, so the host where apps run stays sealed off.</p>
</li>
<li><p>Minimal hypervisor: A lightweight, KVM‑based layer handles just compute and memory isolation—nothing extra.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758574358504/86c1177e-9e70-42db-9e6e-a7c4fe2150fd.png" alt class="image--center mx-auto" /></p>
<p>Figure 2. AWS Nitro <a target="_blank" href="https://aws.amazon.com/awstv/watch/f915a84528e/">Link</a></p>
<hr />
<h4 id="heading-chttpsawsamazoncomawstvwatchf915a84528ehapter-2-google-cloud-the-kvm-mastermind-with-a-titanium-heart">Chapter 2: Google Cloud – The KVM Mastermind (with a Titanium Heart)</h4>
<p>Story arc:<br />Google kept KVM at the core, then armored it and gave it a new bodyguard: Titanium—a smart offload layer on the host and across the data‑center fabric.</p>
<p>The how:</p>
<ul>
<li><p>Hardened KVM on the host keeps VMs isolated and tight.</p>
</li>
<li><p>Titanium on‑host offload (think IPU/DPU) takes over packet paths and block I/O so the CPU can focus on apps.</p>
</li>
<li><p>A second tier of scale‑out offloads spreads work across Google’s fabric—Hyperdisk teams up with Colossus to push huge IOPS without upsizing compute.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758574253726/0984b50f-ce9b-4794-9de6-9ca049b4abbd.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>“The control plane dynamically detects flows that exceed a specified usage threshold and programs them to be direct host‑to‑host flows… allowing offload systems to focus on the long tail.”<br /><a target="_blank" href="https://cloud.google.com/blog/products/compute/titanium-underpins-googles-workload-optimized-infrastructure">Source</a></p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758574244299/ee06555f-8e95-42d3-a369-38f8d9ca4f53.png" alt class="image--center mx-auto" /></p>
<p>Figures 3 and 4. Titanium offload block diagram (simplified) and Titanium in action</p>
<hr />
<h4 id="heading-chapter-3-microsoft-azure-the-enterprise-champions-proven-path">Chapter 3: Microsoft Azure – The Enterprise Champion’s Proven Path</h4>
<p>Story arc:<br />Azure leaned into its strength: Hyper‑V. It refined a design enterprises know well—parent partition in charge, child partitions for VMs—and made the pathways between them fast and clean.</p>
<p>The how:</p>
<ul>
<li><p>A parent partition owns the hardware and offers services.</p>
</li>
<li><p>Child partitions run the VMs.</p>
</li>
<li><p>They talk over VMBus—high‑speed channels that cut out slow device emulation. Drivers that “know” they’re virtual make it even faster.</p>
</li>
</ul>
<pre>Parent Partition (devices + management)
        │ VMBus channels
        ▼
Child Partition (VM)  × N</pre>
<hr />
<h3 id="heading-act-3-the-final-verdict-the-silent-race-continues">Act 3: The Final Verdict – The Silent Race Continues 🏁</h3>
<p>One‑line scoreboard:</p>
<ul>
<li><p>Performance: Offload wins. Nitro and Titanium move I/O away from the host CPU; VMBus keeps Azure’s path lean for Windows‑heavy stacks.</p>
</li>
<li><p>Security posture: Minimal host access and hardware roots of trust are the norm—AWS locks down hosts; Google adds isolation via offload tiers; Hyper‑V enforces strict partition boundaries.</p>
</li>
<li><p>Flexibility: Google’s scale‑out offloads let storage and network scale without resizing compute; AWS keeps shipping new instance types; Azure shines in hybrid cohesion.</p>
</li>
</ul>
<p>Closing shot:<br />This race is quiet, but it powers the internet. Every new offload, tighter lock, and faster data path makes apps snappier and safer—from a single startup to the world’s biggest enterprises. The finish line keeps moving with each new card, bus, and silicon upgrade.</p>
]]></content:encoded></item><item><title><![CDATA[How I Connect to My Homelab from Anywhere with Tailscale]]></title><description><![CDATA[In our last article, I showed you how my lab serves as a powerful, centralized environment for all my coding projects. But what's the point of a central server if you can't access it from anywhere? As a student who splits time between home and a host...]]></description><link>https://blog.mayankpadhi.com/how-i-connect-to-my-homelab-from-anywhere-with-tailscale</link><guid isPermaLink="true">https://blog.mayankpadhi.com/how-i-connect-to-my-homelab-from-anywhere-with-tailscale</guid><category><![CDATA[Homelab]]></category><category><![CDATA[vpn]]></category><category><![CDATA[networking]]></category><category><![CDATA[tailscale]]></category><category><![CDATA[#RemoteAccess]]></category><dc:creator><![CDATA[Mayank Padhi]]></dc:creator><pubDate>Mon, 08 Sep 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758652999655/76d25cea-f450-4b88-a357-ecbf98ea9037.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In our last article, I showed you how my lab serves as a powerful, centralized environment for all my coding projects. But what's the point of a central server if you can't access it from anywhere? As a student who splits time between home and a hostel, I needed a way to securely connect my laptop to my homelab no matter where I am.</p>
<p>This article is all about how I solved that problem. We'll look at why a simple solution isn't always the best and then dive into the tool that made my life so much easier.</p>
<h3 id="heading-the-problem-with-port-forwarding">The Problem with Port Forwarding</h3>
<p>When I first started, my initial thought was simple: just open a port on my home router and point it to my homelab's IP address. This is called <strong>port forwarding</strong>. In theory, it works, allowing external traffic to bypass the router and reach a specific device on your internal network.</p>
<p>However, I quickly learned this is a terrible idea. Opening a port on your router is like leaving the front door to your house wide open. It exposes your homelab and potentially your entire home network to the internet. This makes it a prime target for hackers and malicious bots that are constantly scanning for vulnerabilities.</p>
<h3 id="heading-the-solution-a-secure-tunnel">The Solution: A Secure Tunnel</h3>
<p>The answer to this problem is a <strong>Virtual Private Network (VPN)</strong>. Think of a VPN as a private, encrypted tunnel through the public internet. It allows your devices to communicate with your homelab as if they were on the same local network, even when they're miles apart. This keeps your connection private and secure, as no one can see what's happening inside the tunnel.</p>
<p>Traditionally, setting up a VPN server can be complicated. You'd need to configure a server on your homelab with something like <strong>OpenVPN</strong> or <strong>WireGuard</strong>, manage certificates and keys for each device, and deal with complex firewall rules. As a beginner, this felt like a huge obstacle.</p>
<h3 id="heading-the-game-changer-tailscale">The Game-Changer: Tailscale</h3>
<p>Then I found <strong>Tailscale</strong>. It completely changed the game for me. Tailscale is a <strong>mesh VPN</strong> service that builds on the <strong>WireGuard</strong> protocol but makes the setup incredibly simple. It creates a private network that securely connects all your devices—laptops, phones, servers—regardless of their location.</p>
<p>Here’s how it works:</p>
<ol>
<li><p><strong>Install Tailscale on each device.</strong> I installed it on my homelab server, my laptop, and my phone.</p>
</li>
<li><p><strong>Log in with the same account.</strong> As soon as each device logs in, they all become part of the same private Tailscale network.</p>
</li>
<li><p><strong>Done!</strong> That’s it. There's no need to configure complex firewall rules or mess with router settings.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758650880022/06451818-2850-4b8a-9800-79dc87709785.png" alt class="image--center mx-auto" /></p>
<p>Figure. VPN tunnel overview</p>
<p>With Tailscale, I can access my homelab server from my hostel as if it were sitting right next to me. I can securely SSH into my server, connect to my VS Code Server instance, and even access my self-hosted services using their private Tailscale IP addresses. It’s all seamless, secure, and incredibly easy.</p>
<p>For a beginner like me, Tailscale was a godsend. It removed the biggest barrier to remote access and let me get straight to the fun part of using my homelab. It also takes the complexity out of reaching the node at my parents’ home.</p>
]]></content:encoded></item><item><title><![CDATA[From Laptop to Lab: Building My Remote Coding Setup]]></title><description><![CDATA[One of the main reasons I built my lab was to have a powerful, centralized environment for all my coding projects. My daily driver is a capable laptop, but running resource-intensive tasks on it can quickly drain the battery and slow things down. By ...]]></description><link>https://blog.mayankpadhi.com/from-laptop-to-lab-building-my-remote-coding-setup</link><guid isPermaLink="true">https://blog.mayankpadhi.com/from-laptop-to-lab-building-my-remote-coding-setup</guid><category><![CDATA[JupyterLab]]></category><category><![CDATA[VS Code]]></category><category><![CDATA[Conda ]]></category><category><![CDATA[Homelab]]></category><category><![CDATA[Data Science]]></category><category><![CDATA[vibe coding]]></category><dc:creator><![CDATA[Mayank Padhi]]></dc:creator><pubDate>Sun, 07 Sep 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758653505680/aabba604-eaee-4885-8d1c-52cb26f51aff.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the main reasons I built my lab was to have a powerful, centralized environment for all my coding projects. My daily driver is a capable laptop, but running resource-intensive tasks on it can quickly drain the battery and slow things down. By moving my development environment to my homelab, I get a consistent, powerful, and accessible workspace from any device.</p>
<h3 id="heading-the-power-of-vs-code-server">The Power of VS Code Server</h3>
<p>My coding journey revolves around Visual Studio Code (except <strong>CMake</strong>). It's a fantastic editor, and the best part is that I don't need to run it on my laptop. Instead, I installed <strong>VS Code Server</strong> on my homelab server. Once it was installed, I could access it through a web browser on any of my devices—my laptop, my phone, or even the old desktop at home.</p>
<p>Although I’m still in the trial phase, the experience is seamless. It feels exactly like the desktop version of VS Code, with all my familiar themes and extensions already there. I can open my code files, run terminal commands, and debug my programs from anywhere. All the heavy lifting, like compiling large projects or running tests, is done on my powerful Dell workstation, so my laptop stays cool and quiet.</p>
<h3 id="heading-jupyterlab-for-data-science-and-learning">JupyterLab for Data Science and Learning</h3>
<p>After working with cloud platforms like Google Cloud Platform's Vertex AI Workbench and Google Colab, I've grown incredibly comfortable with their interactive environments. They've become my go-to for all my coursework, especially for data science and machine learning projects. I love how these tools let me write and run code, visualize data, and create documentation all in one place.</p>
<p>The biggest challenge I faced early on was managing dependencies. Different projects require different versions of Python libraries, and I quickly ran into what people call "dependency hell." The solution? <strong>Conda</strong>. I created multiple <strong>Python environments</strong>, each with its own set of libraries for specific projects. For instance, I have one environment for a machine learning project using TensorFlow and another for a data analysis project using Pandas and NumPy. This simple setup has saved me from countless headaches.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758648293510/967ef35d-251c-4f68-aa92-043cf04bd709.png" alt class="image--center mx-auto" /></p>
<p>Image 1. Locally hosted JupyterLab (example code)</p>
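<p>I use Conda, but the isolation idea is the same as Python’s built-in <code>venv</code> module. Here’s a minimal stdlib sketch of creating a throwaway isolated environment — <code>venv</code> stands in for my Conda workflow here, and the environment name is just an example:</p>

```python
import subprocess
import sys
import tempfile
import venv
from pathlib import Path

# Create an isolated environment in a temp dir (stdlib venv stands in
# for the Conda workflow described above; "ml-env" is an example name).
root = Path(tempfile.mkdtemp()) / "ml-env"
venv.create(root, with_pip=False)  # with_pip=False keeps creation fast

# The env gets its own interpreter, separate from the one running this
# script, so each project's libraries stay walled off from the others.
env_python = root / ("Scripts" if sys.platform == "win32" else "bin") / "python"
out = subprocess.run(
    [env_python, "-c", "import sys; print(sys.prefix)"],
    capture_output=True, text=True,
)
```

<p>With Conda the commands differ (<code>conda create</code>, <code>conda activate</code>), but the payoff is identical: TensorFlow in one environment can’t break Pandas in another.</p>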
<h3 id="heading-the-ultimate-hybrid-workflow-google-colab-local-runtime">The Ultimate Hybrid Workflow: Google Colab + Local Runtime</h3>
<p>Here's a cool trick I discovered that has been a game-changer for me. I can connect <strong>Google Colab</strong> to my local JupyterLab runtime. This gives me the best of both worlds: the familiar and powerful Google Colab interface, but with all the processing power and storage of my homelab. It's perfect for when I want to use Google's features while keeping my data and computations on my own hardware.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758648543001/aedee665-54ba-40cc-b5e0-dae85b7e3160.png" alt class="image--center mx-auto" /></p>
<p>Image 2. Local Runtime connected</p>
<p><strong>My pro tips for a remote coding setup:</strong></p>
<ul>
<li><p><strong>Always use virtual environments.</strong> Whether it's Conda or venv, isolating project dependencies will save you a world of pain.</p>
</li>
<li><p><strong>Invest in a good internet connection.</strong> My 40 Mbps downstream is more than enough for a smooth remote coding experience.</p>
</li>
<li><p><strong>Version control is your best friend.</strong> Get into the habit of using Git for all your projects, even the small ones.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Homelab Devices: The Hardware That Powers Me]]></title><description><![CDATA[¡Hola! Soy Dora, can you see my homelab? No!!! But I’m building one! Like many of you, I started this journey not as a seasoned pro, but as someone who wanted to learn by doing. I've spent years watching YouTube videos (LTT) and reading guides, and...]]></description><link>https://blog.mayankpadhi.com/homelab-devices-the-hardware-that-powers-me</link><guid isPermaLink="true">https://blog.mayankpadhi.com/homelab-devices-the-hardware-that-powers-me</guid><category><![CDATA[proxmox]]></category><category><![CDATA[vmware]]></category><category><![CDATA[virtualization]]></category><category><![CDATA[ubuntu-server]]></category><category><![CDATA[Homelab]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Mayank Padhi]]></dc:creator><pubDate>Sat, 06 Sep 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758653694989/a5f97930-81df-4058-adf3-27b5af154137.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>¡Hola! Soy Dora — can you see my homelab? No!!! But I’m building one! Like many of you, I started this journey not as a seasoned pro, but as someone who wanted to learn by doing. I've spent years watching YouTube videos (LTT) and reading guides, and now I'm here to share my own personal setup. This first article is all about the hardware—the physical and virtual devices that make my homelab tick.</p>
<h3 id="heading-le-fond-a-repurposed-workstation">The Foundation: A Repurposed Workstation</h3>
<p>When I decided to build a homelab, I didn't want to buy a new server (money!!). Instead, I opted to repurpose a powerful workstation I found. My main server is a <strong>Dell Precision 7910</strong>. It’s a beast of a machine, and for me, that power is what makes this lab work.</p>
<p>The specs are overkill for a beginner, but that's exactly what I wanted. This machine has two <strong>Intel Xeon E5-2673 v4 CPUs</strong>, giving me a massive <em>40 cores and 80 threads</em> to play with. This allows me to run multiple virtual machines and containers without a hitch. I've also got <strong>64GB of RAM</strong> and a combination of a <strong>512GB SSD</strong> for my operating systems and a <strong>2TB HDD</strong> for general storage. There's plenty of room for future expansion too: 8+ drive bays, four PCIe x8 slots, a 1300 W power supply, and support for up to 1TB of RAM. More than enough headroom for all the virtualization and remote magic.</p>
<p>To protect this valuable machine and my data, I have a <strong>Schneider APC 1100 VA UPS</strong>. It has been one of my best purchases, given the frequent power cuts in my area.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758639481548/4c1f04e5-aa5f-4171-93cc-cea039e3aa13.jpeg" alt class="image--center mx-auto" /></p>
<p>Image: The repurposed workstation</p>
<h3 id="heading-the-network-backbone">The Network Backbone</h3>
<p>A good homelab is built on a solid network, even a simple one. My setup begins with my <strong>Airtel ISP router</strong>. It provides me with fast internet—I get around <strong>45 Mbps upstream</strong> and <strong>40 Mbps downstream</strong> with a <strong>1 Gbps router</strong>.</p>
<p>From there, my devices are connected to a simple <strong>D-Link DGS-1008A 8-port switch</strong>. This is an <strong>unmanaged switch</strong>, which means it's incredibly simple and just works. It acts like a network power strip, letting me connect all my wired devices without any complicated configuration, which matters because I only have one LAN port on my ISP router.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758638997528/5c387c94-2ca6-4a0c-aa9e-9638ae02b412.png" alt class="image--center mx-auto" /></p>
<p>Figure: Simplified network diagram</p>
<h3 id="heading-the-software-and-virtualization-layer">The Software and Virtualization Layer</h3>
<p>For my lab, I chose <strong>Proxmox Virtual Environment</strong> as my hypervisor. I picked it because it’s free and offers a fantastic web interface that makes managing my virtual machines a breeze.</p>
<p>Within Proxmox, my primary server is an <strong>Ubuntu Server 22.04 VM</strong>. It's where I host my <strong>VS Code Server</strong> and <strong>JupyterLab</strong> for remote coding, along with most of my current projects. I'm also using my lab as a learning sandbox; I'm even running a separate <strong>ESXi VM within Proxmox</strong> to learn about different virtualization platforms without needing dedicated hardware.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758640188634/5124596f-3321-49b9-a633-a4e49a841af0.png" alt class="image--center mx-auto" /></p>
<p>Image: Proxmox dashboard</p>
<h3 id="heading-storage-backups-and-other-devices">Storage, Backups, and Other Devices</h3>
<p>When it comes to storage and data protection, my approach is simple and honest. I don't have a RAID setup because I don't have a lot of mission-critical data. If a drive fails, I'll just replace it and start over. For important files, I have a simple backup strategy: I periodically back up my critical VMs to my laptop and create weekly snapshots of key data to Google Cloud Storage.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758641132559/b4b43394-457a-46c1-93ba-b74b30356a8d.png" alt class="image--center mx-auto" /></p>
<p>Figure: Backup flow diagram</p>
<p>In addition to my main workstation, I also have a <strong>Dell Inspiron 3505 laptop</strong> as my daily driver. It has a <strong>Ryzen 3 CPU</strong>, <strong>16GB of RAM</strong>, a <strong>512GB SSD</strong>, and a <strong>1TB HDD</strong>. I use it for daily tasks and as a backup location for my homelab. I also have an older <strong>i5 6th gen system</strong> with <strong>8GB of RAM</strong> and a <strong>2TB HDD</strong> running Linux Mint at my home, which I'll discuss in a future article on remote networking.</p>
<p>That’s a wrap on the hardware that powers my homelab, small as it is. In the next article, I'll show you how I've configured my systems to create a powerful remote coding environment using <strong>VS Code Server</strong> and <strong>JupyterLab</strong>. Stay tuned!</p>
]]></content:encoded></item><item><title><![CDATA[SelfMate : An AI shopping assistant for Walmart Ecosystem]]></title><description><![CDATA[Meet an AI shopping assistant that sees, thinks, and acts in real time—finding the right product, checking local stock, applying the best price, and explaining why, often in under a heartbeat.

Hero stats:

<200 ms cached replies

10k+ concurrent use...]]></description><link>https://blog.mayankpadhi.com/selfmate-an-ai-shopping-assistant-for-walmart-ecosystem</link><guid isPermaLink="true">https://blog.mayankpadhi.com/selfmate-an-ai-shopping-assistant-for-walmart-ecosystem</guid><category><![CDATA[GCP]]></category><category><![CDATA[Walmart sparkathon]]></category><category><![CDATA[Function Calling]]></category><category><![CDATA[chatbot]]></category><category><![CDATA[Retail]]></category><category><![CDATA[#retailsoftware]]></category><category><![CDATA[qdrant]]></category><category><![CDATA[geminiAPI]]></category><category><![CDATA[realtime]]></category><dc:creator><![CDATA[Mayank Padhi]]></dc:creator><pubDate>Sun, 17 Aug 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758653549043/8dc452a7-0b47-43d7-94bf-51a68d628b4a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Meet an AI shopping assistant that sees, thinks, and acts in real time—finding the right product, checking local stock, applying the best price, and explaining why, often in under a heartbeat.</p>
<ul>
<li><p><strong><em>Hero stats:</em></strong></p>
<ul>
<li><p>&lt;200 ms cached replies</p>
</li>
<li><p>10k+ concurrent users</p>
</li>
<li><p>95%+ recommendation accuracy (stress‑tested in production)</p>
</li>
</ul>
</li>
<li><p><strong><em>Engineered for scale:</em></strong> microservices, Gemini 2.5 Pro multi‑model, and a polyglot data layer tuned for retail. It can serve every surface: the shopping platform, in-store kiosks, and a standalone chatbot app.</p>
</li>
</ul>
<h3 id="heading-user-pain-hold-and-why-this-problem-statement">User Pain and Why This Problem Statement</h3>
<p>The <strong>user pain</strong> is clear: keyword boxes and endless catalogs slow decisions, especially in‑aisle where price, diet, and local stock all collide; expectations set by Amazon’s constant tech leaps demand effortless, guided discovery, so Walmart must turn its unmatched offline footprint into a digital advantage.</p>
<p>This problem statement targets that gap: a lightweight assistant that elevates the in‑store experience for shoppers, plus a companion extension for associates that speeds availability checks, suggests substitutes, and accelerates pick/pack, reducing abandonment while converting Walmart’s physical scale into a durable edge.</p>
<h3 id="heading-system-at-a-glance">System at a Glance</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758617284500/7f4128a5-0df7-4abe-93e2-992bcf5ddf01.png" alt class="image--center mx-auto" /></p>
<p>Figure 1. System Architecture <a target="_blank" href="https://www.eraser.io/ai">Tool Used→</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758617119102/c1aef082-9c68-43b9-803f-9f31ee2f1603.png" alt class="image--center mx-auto" /></p>
<p>Figure 2. Data Flow Diagram</p>
<h3 id="heading-how-it-works-in-5-beats">How it works (in 5 beats)</h3>
<ul>
<li><p><strong><em>Intent lock:</em></strong> Domain prompts route queries to “search, compare, refine” with schema‑locked tools.</p>
</li>
<li><p><strong><em>Dual‑tier routing:</em></strong> Tier 1 handles the essentials instantly; Tier 2 adds availability, alternatives, and trends when context demands.</p>
</li>
<li><p><strong><em>Search + semantics:</em></strong> Typesense narrows fast; Qdrant re‑ranks by meaning and visuals; fused for precision.</p>
</li>
<li><p><strong><em>Reality check:</em></strong> Live inventory and regional pricing blend into the final rank so answers match shelf truth.</p>
</li>
<li><p><strong><em>Personal touch:</em></strong> Profile, behavior, and store affinity shape a personalization score alongside relevance.</p>
</li>
</ul>
<h3 id="heading-user-experience">User Experience:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758620171919/716c320d-2248-4990-90d5-da369879c7d6.png" alt class="image--center mx-auto" /></p>
<p>Figure 3. User Flow Diagram</p>
<p>Start with <strong>scan, snap, ask</strong>: users scan a shelf QR/tag, snap an item photo, or speak/type “gluten‑free cereal under INR 100 near 75001.” The assistant locks intent and fires a Tier 1 search to filter by price and diet, fuses semantic matches from Typesense and Qdrant, and escalates to Tier 2 for live availability, pickup/delivery options, and nearest‑store distance. It then returns a tight “Your Smart Picks” panel with the best option (₹85), a why‑this explanation, and healthier alternatives—one smooth, real‑time glide from question to add‑to‑cart.</p>
<h3 id="heading-lesson-learned">Lessons Learned</h3>
<ul>
<li><p><strong>Accuracy and Relevance:</strong> We fixed hallucinations and relevance issues by using a hybrid search approach (Typesense + Qdrant re-ranking) and ensuring that all responses are grounded in real data, with built-in retries for failures.</p>
</li>
<li><p><strong>Latency and Efficiency:</strong> We dramatically reduced latency spikes by implementing parallel processing, short-term caching, and setting strict time limits for each step. We also managed token costs by using a lower model temperature and compressing memory.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[LXC Container + Tailscale Setup on Proxmox
(We have got vpn at home)]]></title><description><![CDATA[LXC Container + Tailscale Setup on Proxmox
Part of the Proxmox Baremetal Journey series
I wanted to make my Proxmox homelab more secure and flexible by using Tailscale on an Ubuntu 22.04 LXC container. The end goal: turn the LXC into a subnet router ...]]></description><link>https://blog.mayankpadhi.com/lxc-container-tailscale-setup-on-proxmox-we-have-got-vpn-at-home</link><guid isPermaLink="true">https://blog.mayankpadhi.com/lxc-container-tailscale-setup-on-proxmox-we-have-got-vpn-at-home</guid><category><![CDATA[tailscale]]></category><category><![CDATA[LXC]]></category><category><![CDATA[virtual machine]]></category><category><![CDATA[vpn]]></category><category><![CDATA[proxmox]]></category><dc:creator><![CDATA[Mayank Padhi]]></dc:creator><pubDate>Wed, 06 Aug 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757532540379/db649d00-34de-496e-956a-545e6c6cdbce.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-lxc-container-tailscale-setup-on-proxmox">LXC Container + Tailscale Setup on Proxmox</h1>
<p><em>Part of the</em> <strong><em>Proxmox Baremetal Journey</em></strong> <em>series</em></p>
<p>I wanted to make my Proxmox homelab more secure and flexible by using <strong>Tailscale</strong> on an <strong>Ubuntu 22.04 LXC container</strong>. The end goal: turn the LXC into a subnet router and exit node for my entire Tailscale network, and later add IDS/IPS for experimentation. This post documents the steps to install and set up Tailscale.</p>
<p>Think of this as both a <strong>how-to guide</strong> and a <strong>"learn from my setup journey"</strong> story.</p>
<hr />
<h2 id="heading-1-preparing-the-lxc-in-proxmox">1. Preparing the LXC in Proxmox</h2>
<p>When creating your container in Proxmox, you’ll need to enable nesting and ensure the tun device is available.</p>
<p>Here is my config, <code>/etc/pve/lxc/100.conf</code> (for container CT100):</p>
<pre><code class="lang-ini">arch: amd64
cores: 2
features: <span class="hljs-attr">nesting</span>=<span class="hljs-number">1</span>
hostname: CT100
memory: 512
net0: <span class="hljs-attr">name</span>=eth0,bridge=vmbr0,hwaddr=BC:<span class="hljs-number">24</span>:<span class="hljs-number">11</span>:<span class="hljs-number">58</span>:<span class="hljs-number">98</span>:<span class="hljs-number">29</span>,ip=dhcp,type=veth
rootfs: local-lvm:vm-100-disk-0,<span class="hljs-attr">size</span>=<span class="hljs-number">8</span>G
swap: 512
unprivileged: 1
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,<span class="hljs-attr">create</span>=file
</code></pre>
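<p>If you'd rather create the container from the CLI than the web UI, the equivalent of the settings above looks roughly like this. The template name is an example (use whatever you've downloaded via <code>pveam</code> to local storage), and note that the two <code>tun</code> lines still have to be appended to the config file by hand afterwards:</p>

```bash
# Create CT100 with nesting enabled; the template name is an example —
# substitute whatever Ubuntu 22.04 template you have in local storage.
pct create 100 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
  --hostname CT100 \
  --cores 2 --memory 512 --swap 512 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-lvm:8 \
  --unprivileged 1 \
  --features nesting=1
```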
<h2 id="heading-2-configure-netplan-for-networking">2. Configure Netplan for Networking</h2>
<p>Inside the container, set up DHCP with Netplan:</p>
<pre><code class="lang-bash">nano /etc/netplan/50-cloud-init.yaml
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-attr">network:</span>
  <span class="hljs-attr">version:</span> <span class="hljs-number">2</span>
  <span class="hljs-attr">ethernets:</span>
    <span class="hljs-attr">eth0:</span>
      <span class="hljs-attr">dhcp4:</span> <span class="hljs-literal">true</span>
</code></pre>
<p>Apply changes:</p>
<pre><code class="lang-bash">sudo netplan apply
</code></pre>
<p>Verify IP:</p>
<pre><code class="lang-bash">ip a
</code></pre>
<p><em>Screenshot of IP output:</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757531550269/39f9adf2-c25d-4f6d-80c8-89737cbbbcb9.png" alt class="image--center mx-auto" /></p>
<p><em>Screenshot of netplan config:</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757531570783/5663eb47-3f99-47b2-a0a9-8259ac4524da.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-3-install-tailscale">3. Install Tailscale</h2>
<p>Follow the <a target="_blank" href="https://tailscale.com/kb/1187/install-ubuntu-2204">official guide</a>:</p>
<pre><code class="lang-bash">curl -fsSL https://tailscale.com/install.sh | sh
</code></pre>
<p>Bring Tailscale up:</p>
<pre><code class="lang-bash">sudo tailscale up
</code></pre>
<p>A browser window will open asking to connect your device.</p>
<p><em>Tailscale auth screen:</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757531760498/09079f0c-304a-4d90-b46e-e60b2cb378fd.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-4-enable-subnet-routing-amp-exit-node">4. Enable Subnet Routing &amp; Exit Node</h2>
<p>Advertise your container as an exit node:</p>
<pre><code class="lang-bash">sudo tailscale up --advertise-exit-node
</code></pre>
<p>Or to route your LAN subnet:</p>
<pre><code class="lang-bash">sudo tailscale up --advertise-routes=&lt;LAN_SUBNET&gt;
</code></pre>
<p>Approve routes in the Tailscale admin console.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757531914595/9701b40f-f7b1-4fdd-8049-53946137a753.png" alt class="image--center mx-auto" /></p>
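<p>One step that's easy to miss: a subnet router or exit node must have IP forwarding enabled, or advertised routes won't actually pass traffic. The Tailscale docs persist it via sysctl; in an unprivileged LXC you may need to apply these on the Proxmox host instead:</p>

```bash
# Enable IPv4 and IPv6 forwarding and persist across reboots
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf

# Apply the new settings immediately
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
```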
<h2 id="heading-5-using-the-lxc-as-exit-node">5. Using the LXC as Exit Node</h2>
<p>On another Tailscale device, set the container as your exit node:</p>
<pre><code class="lang-bash">sudo tailscale up --exit-node=&lt;LXC_IP&gt;
</code></pre>
<p>Replace <code>&lt;LXC_IP&gt;</code> with the container’s Tailscale IP (the <code>100.x</code> address) or its machine name from the admin console.</p>
<hr />
<h2 id="heading-6-handling-key-expiry">6. Handling Key Expiry</h2>
<p>By default, Tailscale auth keys expire. If you need reusable or long-lived keys, check <a target="_blank" href="https://tailscale.com/kb/1028/key-expiry">Key Expiry Docs</a>.</p>
<p>Generate and use reusable keys for automation.</p>
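<p>With a reusable key generated in the admin console, re-joining a rebuilt container becomes non-interactive. A quick sketch (the key value below is a placeholder):</p>

```bash
# Non-interactive join using a pre-generated auth key (placeholder value).
# Combine with the flags from earlier so the node comes up fully configured.
sudo tailscale up \
  --authkey tskey-auth-XXXXXXXXXXXX \
  --advertise-exit-node
```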
<hr />
<h2 id="heading-takeaways">Takeaways</h2>
<ul>
<li><p>Nesting + tun device are <strong>required</strong> in the LXC config.</p>
</li>
<li><p>Netplan must be configured with DHCP.</p>
</li>
<li><p>Tailscale can easily advertise routes and act as an exit node.</p>
</li>
<li><p>Keys expire unless replaced with reusable ones.</p>
</li>
</ul>
<p>With this setup, my Proxmox-hosted LXC is now a fully functioning <strong>VPN gateway + exit node</strong> for my Tailscale network.</p>
]]></content:encoded></item><item><title><![CDATA[Installing Proxmox Backup Server (PBS) on Ubuntu 22.04 Laptop]]></title><description><![CDATA[Part of the Proxmox Baremetal Journey series
I broke my install more times than I'd like to admit, but by the end I learned a ton about how PBS works under the hood. This post documents everything I learned while setting up Proxmox Backup Server to h...]]></description><link>https://blog.mayankpadhi.com/installing-proxmox-backup-server-pbs-on-ubuntu-2204-laptop</link><guid isPermaLink="true">https://blog.mayankpadhi.com/installing-proxmox-backup-server-pbs-on-ubuntu-2204-laptop</guid><category><![CDATA[proxmox]]></category><category><![CDATA[proxmox backup]]></category><category><![CDATA[ProxmoxVE]]></category><category><![CDATA[virtualization]]></category><category><![CDATA[baremetal]]></category><category><![CDATA[virtual machine]]></category><dc:creator><![CDATA[Mayank Padhi]]></dc:creator><pubDate>Mon, 04 Aug 2025 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757343940100/01e7947b-f4a6-438b-ae1d-ac554a7c19b8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Part of the</em> <strong><em>Proxmox Baremetal Journey</em></strong> <em>series</em></p>
<p>I broke my install more times than I'd like to admit, but by the end I learned a ton about how PBS works under the hood. This post documents everything I learned while setting up Proxmox Backup Server to help others avoid my mistakes and establish production-level workflows.</p>
<p>Think of this as both a <strong>how-to guide</strong> and a <strong>"don't make my mistakes"</strong> story.</p>
<h2 id="heading-what-went-wrong-and-how-i-fixed-it">🔧 What Went Wrong (And How I Fixed It)</h2>
<p>PBS is <strong>very strict</strong> about ownership and permissions. If they're wrong, nothing works. Here are the main traps I fell into:</p>
<p><strong>Permission Issue #1</strong> → The <code>proxmox-backup</code> directory was owned by <code>root:root</code>. PBS needs it owned by <code>backup:backup</code>.</p>
<p><strong>Permission Issue #2</strong> → I used <code>755</code> for the config directory. PBS requires <code>700</code>.</p>
<p><strong>Permission Issue #3</strong> → My datastore subdirectories weren't owned by <code>backup:backup</code>. That blocked PBS from creating the crucial <code>.chunks</code> directory.</p>
<h2 id="heading-clean-installation-guide">🛠 Clean Installation Guide</h2>
<h3 id="heading-1-update-amp-prepare-system">1. Update &amp; Prepare System</h3>
<pre><code class="lang-bash">sudo apt update &amp;&amp; sudo apt upgrade -y
sudo apt install curl wget gnupg lsb-release apt-transport-https -y
</code></pre>
<h3 id="heading-2-add-the-proxmox-repository">2. Add the Proxmox Repository</h3>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription"</span> | sudo tee /etc/apt/sources.list.d/pbs.list
wget -qO- http://download.proxmox.com/debian/proxmox-release-bookworm.gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
</code></pre>
<h3 id="heading-3-fix-gpg-key-issues-if-needed">3. Fix GPG Key Issues (if needed)</h3>
<pre><code class="lang-bash">sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 1140AF8F639E0C39
sudo apt update
</code></pre>
<h3 id="heading-4-remove-old-packages-for-a-fresh-reinstall">4. Remove Old Packages (for a fresh reinstall)</h3>
<pre><code class="lang-bash">sudo apt-get purge proxmox-backup proxmox-backup-client proxmox-backup-docs proxmox-backup-server -y
sudo apt-get purge proxmox-kernel-* proxmox-default-kernel -y
sudo rm -rf /etc/proxmox-backup /etc/systemd/system/proxmox-backup*
sudo rm -rf /mnt/pbs_backup/*
</code></pre>
<h3 id="heading-5-install-pbs">5. Install PBS</h3>
<pre><code class="lang-bash">sudo apt install proxmox-backup-server -y
</code></pre>
<h3 id="heading-6-create-datastore-directory">6. Create Datastore Directory</h3>
<pre><code class="lang-bash">sudo mkdir -p /mnt/pbs_backup/datastore1
</code></pre>
<h3 id="heading-7-fix-permissions">7. Fix Permissions</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Datastore ownership</span>
sudo chown -R backup:backup /mnt/pbs_backup/datastore1

<span class="hljs-comment"># Create and secure PBS config directory </span>
sudo mkdir -p /etc/proxmox-backup
sudo chown -R backup:backup /etc/proxmox-backup
sudo chmod 700 /etc/proxmox-backup

<span class="hljs-comment"># Manually create .chunks directory (if PBS doesn't create it automatically)</span>
sudo -u backup mkdir -p /mnt/pbs_backup/datastore1/.chunks
</code></pre>
<h3 id="heading-8-start-pbs-services">8. Start PBS Services</h3>
<pre><code class="lang-bash">sudo systemctl start proxmox-backup
sudo systemctl <span class="hljs-built_in">enable</span> proxmox-backup
sudo systemctl start proxmox-backup-proxy
sudo systemctl <span class="hljs-built_in">enable</span> proxmox-backup-proxy
</code></pre>
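<p>One step the packages don't do for you: the directory still has to be registered as a datastore. You can do that in the web UI, or from the shell with <code>proxmox-backup-manager</code>:</p>

```bash
# Register the directory as a PBS datastore; PBS initializes the
# .chunks store here if it doesn't already exist
sudo proxmox-backup-manager datastore create datastore1 /mnt/pbs_backup/datastore1

# Confirm the datastore is visible
sudo proxmox-backup-manager datastore list
```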
<h2 id="heading-key-takeaways">📘 Key Takeaways</h2>
<p><strong>Permissions are critical:</strong></p>
<ul>
<li><code>/etc/proxmox-backup</code> → must be <code>backup:backup</code> with <code>700</code></li>
<li>Datastores → must be <code>backup:backup</code> so PBS can create <code>.chunks</code></li>
</ul>
<p><strong>Two services must run:</strong></p>
<ul>
<li><code>proxmox-backup</code> → API server</li>
<li><code>proxmox-backup-proxy</code> → Web interface</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757343260208/26fc6072-09e6-4a4f-957b-0b4b88a44153.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-troubleshooting-tips">Troubleshooting Tips</h2>
<p>If you encounter issues, check these common points:</p>
<ol>
<li><p>Verify service status: <code>sudo systemctl status proxmox-backup proxmox-backup-proxy</code></p>
</li>
<li><p>Check permissions: <code>ls -la /etc/proxmox-backup</code> and <code>ls -la /mnt/pbs_backup/datastore1</code></p>
</li>
<li><p>Review logs: <code>sudo journalctl -u proxmox-backup -f</code></p>
</li>
</ol>
<p>Remember: when in doubt, proper permissions are usually the answer with PBS.</p>
]]></content:encoded></item></channel></rss>