I squeezed my iGPU dry, then added an eGPU: a GPU buying guide for AI on mini PCs

Last month, I hit a wall with my local LLM setup. Here's the full story: from software optimization to an OCuLink eGPU to picking the right RTX 5060 Ti 16GB, with real pricing and brand teardown data. Not a review; a decision log.

The problem

My machine (call it T2) is a Minisforum AI X1 Pro (AMD Ryzen AI 9 HX 370, 96GB RAM). It runs LM Studio with Gemma 4 E4B and Peach 2.0 for local inference. The Radeon 890M iGPU is decent, but the shared memory architecture is a hard ceiling:

- Bandwidth: ~120 GB/s
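To see why bandwidth is the ceiling, here's a minimal back-of-the-envelope sketch. It assumes (this is my simplification, not a claim from the article) that LLM decoding is memory-bound: generating each token requires streaming roughly the full set of model weights through memory once, so peak tokens/s is bounded by bandwidth divided by model size. The model size used below is a hypothetical example figure.

```python
# Rough upper bound on decode speed for a memory-bandwidth-bound LLM.
# Assumption: each generated token reads ~all model weights from memory once.

def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Theoretical ceiling: memory bandwidth / bytes read per token."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical figures: a ~4B-parameter model quantized to ~4-5 bits/weight
# occupies roughly 2.5 GB; the iGPU's shared memory delivers ~120 GB/s.
ceiling = max_tokens_per_second(120, 2.5)
print(f"best case: ~{ceiling:.0f} tokens/s")
```

Real throughput lands well below this ceiling (compute overhead, KV cache reads, memory shared with the CPU), which is why a discrete GPU with dedicated VRAM bandwidth changes the picture.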