[{"content":"","date":"25 April 2026","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"25 April 2026","externalUrl":null,"permalink":"/tags/migration/","section":"Tags","summary":"","title":"Migration","type":"tags"},{"content":"","date":"25 April 2026","externalUrl":null,"permalink":"/tags/nsx-t/","section":"Tags","summary":"","title":"Nsx-T","type":"tags"},{"content":"","date":"25 April 2026","externalUrl":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":"","date":"25 April 2026","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"25 April 2026","externalUrl":null,"permalink":"/tags/upgrade/","section":"Tags","summary":"","title":"Upgrade","type":"tags"},{"content":" 📌 FEATURED POST vSphere 7 to 8 Migration — Complete Guide Step-by-step upgrade without downtime. What to check before and after, NSX-T gotchas and common mistakes. 
Read → ","date":"25 April 2026","externalUrl":null,"permalink":"/","section":"virtualles","summary":"","title":"virtualles","type":"page"},{"content":"","date":"25 April 2026","externalUrl":null,"permalink":"/categories/vmware/","section":"Categories","summary":"","title":"VMware","type":"categories"},{"content":"","date":"25 April 2026","externalUrl":null,"permalink":"/tags/vsphere/","section":"Tags","summary":"","title":"Vsphere","type":"tags"},{"content":" Introduction # [Fill in — why this article exists, who it is for, and what the reader will take away.]\nPrerequisites # vCenter 7.0 U3 or newer All hosts at least ESXi 7.0 U3 A valid vSphere 8 license Snapshot / backup of the vCenter VM Step 1 — Pre-upgrade checks # [Fill in with the commands and verification steps you run before the upgrade.]\n# Example: check the vCenter version vpxd --version Step 2 — Upgrade vCenter first # Always upgrade vCenter first, then the hosts. Never the other way around.\n[Fill in with the steps from the VAMI GUI or the CLI.]\nStep 3 — Upgrade ESXi hosts # Put each host into Maintenance Mode, upgrade it, verify, then move on to the next one.\n[Fill in with the sequence for a DRS / HA cluster.]\nCommon gotchas # [Fill in with real problems you ran into — e.g. NSX-T compatibility matrix, TPM requirements, VMFS alignment.]\nConclusion # [Fill in with a summary — what you gain after the migration, whether it is worth it, and which new vSphere 8 features are useful.]\n","date":"25 April 2026","externalUrl":null,"permalink":"/posts/vsphere-8-upgrade-guide/","section":"Posts","summary":"","title":"vSphere 7 to 8 Migration — Complete Guide","type":"posts"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/esxi/","section":"Tags","summary":"","title":"Esxi","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/hardware/","section":"Tags","summary":"","title":"Hardware","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/categories/homelab/","section":"Categories","summary":"","title":"Homelab","type":"categories"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/homelab/","section":"Tags","summary":"","title":"Homelab","type":"tags"},{"content":"If you’re building a homelab on a budget, mini PCs running Intel N-series processors are hard to ignore. I’ve been running both the N100 and the N305 as ESXi 8 nodes for the past few months. Here’s what I found.\nThe contenders # Intel N100 Intel N305 Cores / Threads 4C / 4T 8C / 8T TDP 6W 15W Max RAM 16 GB DDR5 32 GB DDR5 Base clock 800 MHz 800 MHz Boost clock 3.4 GHz 3.8 GHz Typical price (unit) ~$150–200 ~$250–350 ESXi 8 compatibility # Both processors install ESXi 8.0 U2+ cleanly. You will not need any custom ISOs or workarounds — just the standard VMware installer.\nOne thing to watch: NIC compatibility. Most mini PCs ship with Realtek NICs, which ESXi 8 no longer supports natively. You have two options:\nAdd a USB-to-Ethernet adapter (Intel I225-V chipset preferred) Use the community VMware NIC driver fling — works but adds management overhead RAM — the real decision factor # This is where the N305 wins clearly. The N100 is hard-capped at 16 GB. 
That sounds fine until you try running 4–5 VMs with any real workload — vCenter alone eats 14 GB.\nThe N305 supports 32 GB, which is the sweet spot for a single-node homelab running vCenter + 3–5 workload VMs.\nPower and noise # The N100 is essentially silent and sips power. Under full ESXi load I measured 8–12W at the wall. The N305 runs at 18–25W under the same conditions — still very reasonable.\nFor always-on homelabs, the N100’s efficiency is genuinely attractive.\nMy verdict # N100 — best if you already have a vCenter elsewhere (or use free ESXi without vCenter) and want a cheap compute node. 16 GB cap is a real constraint. N305 — the better all-in-one node. 32 GB lets you run vCenter + real workloads on a single box. I’m running the N305 as my primary lab node and the N100 as a second compute host in a 2-node vSAN cluster. Works great.\nWhat I bought # Beelink EQ12 Pro (N100, 16GB, 500GB NVMe) — ~$180 Beelink EQ13 (N305, 32GB, 1TB NVMe) — ~$290 Intel I225-V USB adapter — ~$20 Total for a 2-node lab: under $500. 
Not bad.\n","date":"8 April 2026","externalUrl":null,"permalink":"/posts/homelab-mini-pc-esxi-n100-n305/","section":"Posts","summary":"","title":"Mini PC as ESXi node — N100 vs N305 comparison","type":"posts"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/mini-pc/","section":"Tags","summary":"","title":"Mini-Pc","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/n100/","section":"Tags","summary":"","title":"N100","type":"tags"},{"content":"","date":"8 April 2026","externalUrl":null,"permalink":"/tags/n305/","section":"Tags","summary":"","title":"N305","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/cis-benchmark/","section":"Tags","summary":"","title":"Cis-Benchmark","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/categories/cybersecurity/","section":"Categories","summary":"","title":"Cybersecurity","type":"categories"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/hardening/","section":"Tags","summary":"","title":"Hardening","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/security/","section":"Tags","summary":"","title":"Security","type":"tags"},{"content":"","date":"2 April 2026","externalUrl":null,"permalink":"/tags/vcenter/","section":"Tags","summary":"","title":"Vcenter","type":"tags"},{"content":"Every time I set up a new vCenter I run through the same list. Here it is — the things that actually matter, in order of impact.\n1. Change the default SSO domain # The default vsphere.local domain is well-known. Change it during install or shortly after. You can’t rename it post-install, so this one needs to happen early.\n2. 
Disable the default Administrator account # Create a named admin account first, verify it works, then disable administrator@vsphere.local.\n# From vCenter Shell (SSH) /usr/lib/vmware-vmafd/bin/dir-cli user modify \\ --account administrator \\ --disable \\ --login administrator@vsphere.local \\ --password 'YourPassword' 3. Restrict SSH access to vCenter # SSH on vCenter should be off unless you’re actively troubleshooting. Enable it only when needed.\nVAMI → Access → SSH Login → Disabled\n4. Enable NTP and verify time sync # Clock skew breaks Kerberos auth and makes log correlation useless.\n# Check current NTP config timesync-ntp status # Set NTP servers (VAMI UI is easier for this) 5. Tighten TLS to 1.2+ only # Disable TLS 1.0 and 1.1. This is now the default in vSphere 8, but double-check on upgraded deployments.\nVAMI → TLS Configuration → verify TLS 1.0 and 1.1 are disabled.\n6. Enable audit logging to syslog # Forward logs to a SIEM. Without centralized logging, forensics after an incident is nearly impossible.\n# vSphere Client → vCenter → Configure → Advanced Settings # Add: # config.log.host = udp://your-syslog:514 # config.log.level = info 7. Lock down firewall rules # By default vCenter accepts management connections from anywhere. Add firewall rules to restrict access to jump hosts / VPN ranges only.\n8. Configure session timeout # Default idle timeout is very long. Set it to 15–30 minutes.\nAdministration → Client Configuration → Session Timeout\n9. Enable FIPS 140-2 mode (if required) # For regulated environments. Note: enabling FIPS post-install requires a restart.\n10. Review and remove unused plugins # Every plugin is an attack surface. 
Remove anything you’re not actively using.\nAdministration → Client Plug-Ins → disable/remove unused\nQuick audit command # Run this from a jumphost to check which ports are actually open on your vCenter:\nnmap -sV -p 443,80,22,902,9443,5480 <vcenter-ip> Compare against VMware’s required ports documentation and close anything unexpected.\nThis list covers the 80% that matters most. The full CIS Benchmark for vSphere has 100+ controls — worth reading if you’re in a regulated environment.\n","date":"2 April 2026","externalUrl":null,"permalink":"/posts/vcenter-hardening-checklist-2026/","section":"Posts","summary":"","title":"vCenter hardening — practical checklist for 2026","type":"posts"},{"content":"","date":"20 March 2026","externalUrl":null,"permalink":"/tags/ai/","section":"Tags","summary":"","title":"Ai","type":"tags"},{"content":"","date":"20 March 2026","externalUrl":null,"permalink":"/categories/ai--automation/","section":"Categories","summary":"","title":"AI \u0026 Automation","type":"categories"},{"content":"","date":"20 March 2026","externalUrl":null,"permalink":"/tags/automation/","section":"Tags","summary":"","title":"Automation","type":"tags"},{"content":"","date":"20 March 2026","externalUrl":null,"permalink":"/tags/gpu-passthrough/","section":"Tags","summary":"","title":"Gpu-Passthrough","type":"tags"},{"content":"","date":"20 March 2026","externalUrl":null,"permalink":"/tags/llm/","section":"Tags","summary":"","title":"Llm","type":"tags"},{"content":"","date":"20 March 2026","externalUrl":null,"permalink":"/tags/ollama/","section":"Tags","summary":"","title":"Ollama","type":"tags"},{"content":"I’ve been running local LLMs on my homelab for a few months now. The setup is surprisingly practical — fast enough for real use, completely private, zero API costs. Here’s how I did it.\nWhy local LLMs? # Privacy — no prompts sent to OpenAI/Anthropic. 
Useful when working with internal configs, scripts, or customer data. Cost — no per-token billing. Run as many queries as you want. Availability — works offline, not subject to API rate limits or outages. Customization — you can fine-tune models on your own data. The tradeoff: you need decent hardware and a bit of setup time.\nHardware I’m using # GPU: NVIDIA RTX 3060 12GB (passthrough to Ubuntu VM) Host: Intel N305 mini PC, 32GB RAM ESXi 8.0 U2 with GPU passthrough configured A GPU is not strictly required — Ollama runs on CPU too — but performance is dramatically better with one. On CPU only, LLaMA 3 8B takes ~10 seconds per token. With the 3060, it’s real-time.\nStep 1 — GPU passthrough on ESXi 8 # In vSphere Client:\nNavigate to Host → Configure → Hardware → PCI Devices Find your GPU, click Toggle Passthrough, reboot the host On the VM: Edit Settings → Add Other Device → PCI Device → select your GPU Set VM memory reservation to 100% (required for passthrough) Important: passthrough disables the GPU for the ESXi console. Use IPMI/iDRAC or a second display adapter for host management.\nStep 2 — Ubuntu VM setup # I use Ubuntu 24.04 Server. After install:\n# Install NVIDIA drivers sudo apt install nvidia-driver-550 -y # Verify GPU is visible nvidia-smi Step 3 — Install Ollama # curl -fsSL https://ollama.com/install.sh | sh # Pull a model (LLaMA 3 8B is a good start) ollama pull llama3 # Run it ollama run llama3 That’s it. Ollama handles model management, serving, and the API.\nStep 4 — Expose the API # Ollama serves a REST API on port 11434. 
I expose it inside my lab network (not to the internet):\n# Edit the systemd service sudo systemctl edit ollama # Add: [Service] Environment=\"OLLAMA_HOST=0.0.0.0:11434\" Now I can use the API from any machine on my network, including from Ansible playbooks and scripts.\nPractical uses I’ve found # Summarizing logs: paste a 500-line vCenter log → ask “what’s wrong here?” → works surprisingly well.\nWriting Ansible tasks: “Write an Ansible task to configure NTP on RHEL 8” → usually correct on first try.\nExplaining configs: paste an NSX firewall ruleset → ask “explain what this allows” → great for audits.\nModel recommendations # Model Size VRAM needed Good for llama3:8b 4.7 GB 6 GB General tasks, fast llama3:70b 40 GB 48 GB+ Complex reasoning (needs big GPU) mistral:7b 4.1 GB 6 GB Code generation codellama:13b 7.4 GB 10 GB Code only, better than base llama phi3:mini 2.2 GB 3 GB Lightweight, runs on CPU For a 12GB GPU, llama3:8b or codellama:13b are the sweet spots.\nThe whole setup took me about 2 hours. GPU passthrough on ESXi is the fiddly part — everything after that is straightforward.\n","date":"20 March 2026","externalUrl":null,"permalink":"/posts/ollama-local-llm-homelab/","section":"Posts","summary":"","title":"Running local LLMs on a homelab with Ollama","type":"posts"},{"content":"","date":"1 January 2026","externalUrl":null,"permalink":"/about/","section":"virtualles","summary":"","title":"About","type":"page"},{"content":"","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/projects/","section":"Projects","summary":"","title":"Projects","type":"projects"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"},{"content":"VMware architect. 
Writing about infrastructure that actually works — vSphere, homelab, security and AI. No theory, only things I’ve tested myself.\n","externalUrl":null,"permalink":"/authors/default/","section":"Authors","summary":"","title":"Szymon Leszega","type":"authors"}]