Type "can my pc run it" into Google and you'll get a wall of game-compatibility checkers. Most haven't updated their AI predictions because none of them have AI predictions. They check whether your hardware meets a game's published minimum specs — Cyberpunk 2077, Hogwarts Legacy, GTA 6.
That's still useful, but it answers only half the question. In 2026, the most-asked version of "can my PC run it" isn't about a specific game. It's about whether your laptop can run Llama 13B locally without melting, whether Stable Diffusion XL is going to take 5 seconds or 5 minutes per image, or whether 4K video editing will be smooth or a slideshow.
9bench answers all of those. It runs a 15-second hardware test in your browser, then maps your measured performance to concrete capability tiers — for games, for AI workloads, for creative apps. Below is the full guide to interpreting your result.
Step 1 — Run the test (15 seconds, no install)
Open 9bench.com/test in any modern browser (Chrome 113+, Firefox 147+, Edge 113+, Safari 26+). The benchmark runs three measurements in sequence:
- GPU compute (~5s): A WebGPU compute shader executes a 1024×1024 matrix multiplication. Output: GFLOPS.
- CPU single + multi-core (~6s): SHA-256 hash chain on the main thread, then across all cores via Web Workers. Output: hashes/second + multi-core scaling efficiency.
- RAM bandwidth + latency (~4s): Sequential reads/writes + random-access pointer-chase on a 256 MB Float32Array. Output: GB/s + ns latency.
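The random-access latency measurement can be sketched roughly like this. This is an illustrative sketch, not 9bench's actual implementation: the array size, hop count, and use of a shuffled cycle are all assumptions, but the core idea — each read depends on the previous one, so prefetchers can't hide memory latency — is the standard pointer-chase technique.

```javascript
// Sketch of a pointer-chase latency probe (illustrative sizes).
function pointerChaseNs(elements = 1 << 20, hops = 1 << 20) {
  // Build a single random cycle so every read depends on the last read,
  // defeating hardware prefetch and out-of-order overlap.
  const next = new Uint32Array(elements);
  const order = Array.from({ length: elements }, (_, i) => i);
  for (let i = elements - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [order[i], order[j]] = [order[j], order[i]];
  }
  for (let i = 0; i < elements; i++) {
    next[order[i]] = order[(i + 1) % elements];
  }
  // Chase the chain and time it.
  let idx = 0;
  const t0 = performance.now();
  for (let h = 0; h < hops; h++) idx = next[idx];
  const ms = performance.now() - t0;
  return { ns: (ms * 1e6) / hops, sink: idx }; // sink prevents dead-code elimination
}
```

Note that browser timers are deliberately coarsened, which is one reason a browser benchmark reports comparative rather than absolute latency.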
The composite 9bench Score is a weighted geometric mean of these (35% GPU, 45% CPU multi-core, 20% RAM). A geometric mean prevents one strong component from masking a weak one. It's calibrated so that a typical 2024 mainstream laptop scores 1,000-1,500.
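The weighted geometric mean works like this (weights from above; how each component is normalized before weighting is an assumption, not something 9bench documents here):

```javascript
// Weighted geometric mean: exp of the weighted sum of logs.
// Weights per the article: 35% GPU, 45% CPU multi-core, 20% RAM.
function compositeScore({ gpu, cpuMulti, ram }) {
  const logMean =
    0.35 * Math.log(gpu) + 0.45 * Math.log(cpuMulti) + 0.20 * Math.log(ram);
  return Math.exp(logMean);
}

// Balanced components pass through unchanged:
compositeScore({ gpu: 2000, cpuMulti: 2000, ram: 2000 }); // → 2000
// A weak component drags the result down harder than an arithmetic mean would:
compositeScore({ gpu: 2000, cpuMulti: 2000, ram: 200 });  // ≈ 1262, vs. 1640
// for the weighted arithmetic mean of the same inputs
```

That gap (≈1262 vs. 1640) is exactly the "can't mask a weak component" property the composite relies on.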
Step 2 — What your score tells you about games
Game performance correlates strongly (but not perfectly) with the GPU and CPU multi-core scores. Modern AAA titles are GPU-bound at high settings; older or eSports titles are CPU-bound. Here's the mapping from your 9bench Score to typical game capability:
S-tier (9bench Score ≥ 1,386) — Enthusiast / Workstation
You can run anything currently shipping at 1440p or 4K with high settings. GTA 6, Cyberpunk 2077 with path-tracing, Microsoft Flight Simulator 2024 — all comfortable. Frame rates depend on settings, but the bottleneck is rarely your hardware.
Examples in this tier: RTX 5090 / 4090 / 4080 desktops, RTX 4070 Ti+ desktops, Apple M3 Max / M3 Ultra, AMD Ryzen 9 + RX 7900 XTX systems.
A-tier (900-1,385) — Power user
1440p high settings on most games. AAA titles at 1440p with some compromises (medium-high textures, DLSS Quality). Older games (anything pre-2022) run maxed out at 1080p without issue.
Examples: RTX 4070 / 4060 Ti desktops, RTX 4070 Laptop, Apple M2 Max / M3 Pro.
Solid daily driver (600-899) — Comfortable for most games
Runs 1080p high settings comfortably. The newest AAA titles work at 1080p medium with DLSS/FSR Performance. eSports titles (CS2, Valorant, League) easily deliver 144+ fps at 1080p high.
Examples: RTX 4060 / 4060 Laptop / 3070 desktops, Apple M2 Pro / M3, RX 6700 XT.
Working machine (300-599) — Office class
Older or lighter games run fine: Stardew Valley, Hades, indie titles, eSports at 1080p medium. AAA titles from 2020+ playable but require low/medium settings + upscaling.
Patient & honest (≤ 299) — Light duty
Office work, browser-everything, light gaming on integrated graphics. Cyberpunk 2077 will technically boot but isn't going to be fun. This tier is fine if your needs match — many people don't actually need more.
Step 3 — Can your PC run local AI models?
This is where 9bench differs from every other "can my PC run it" tool. We test specifically for AI workloads using a separate AI Capability Score that combines GPU compute, memory bandwidth, browser allocation headroom, and FP16 (shader-f16) detection.
The result page shows two views: Browser (what runs in your tab today via transformers.js / web-llm — sandbox memory limits + WebGPU translation overhead included) and Native (estimated tokens/s with llama.cpp, Ollama, or ComfyUI — assuming typical RAM headroom and direct GPU access).
Llama 7B (Q4 quantized) — the entry-level local LLM
Llama 7B at 4-bit quantization needs ~4 GB for weights + ~2 GB for KV cache = ~6 GB usable RAM. Native compute requirement: any GPU above 4 TFLOPS handles it.
- RTX 5090 / 4090 / M3 Ultra: ~100-200 tokens/s native
- RTX 4070 / M3 Max: ~50-80 tokens/s native
- RTX 4060 Laptop / M3 Pro / M2 Pro: ~25-45 tokens/s native
- Iris Xe / UHD Graphics integrated: ~5-12 tokens/s native (CPU-bound)
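The memory arithmetic behind these requirements is easy to sanity-check yourself. One caveat: "Q4" formats don't land at exactly 4 bits per parameter once quantization scales and metadata are included — the effective bits/param values below are rough assumptions, as are the KV-cache allowances:

```javascript
// Rough RAM budget for a quantized LLM: weights plus a KV-cache allowance.
// billionParams × 1e9 params × (bits / 8) bytes/param / 1e9 bytes/GB
// simplifies to billionParams × bits / 8.
function quantMemoryGB(billionParams, bitsPerParam, kvCacheGB) {
  const weightsGB = (billionParams * bitsPerParam) / 8;
  return weightsGB + kvCacheGB;
}

// Q4-family formats typically land around 4.5-5 effective bits/param:
quantMemoryGB(7, 4.5, 2);  // ≈ 5.9 GB — matches the "~6 GB" figure above
quantMemoryGB(13, 5, 3);   // ≈ 11.1 GB — matches the 13B section's "~11 GB"
```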
Browser-side (transformers.js): typically 5-15× slower due to WebAssembly overhead vs native CUDA/Metal/Vulkan. Still functional for testing — 9bench has a live LLM test button that downloads Phi-3-mini and runs real inference in your browser, so you can verify the prediction without leaving the page.
Llama 13B (Q4) — the productive local LLM
Needs ~8 GB for weights + ~3 GB cache = ~11 GB usable RAM. Most 2024+ laptops with 16 GB handle it. Tokens/s scales roughly with memory bandwidth and native GFLOPS:
- Apple Silicon Pro/Max: 30-60 t/s (unified memory advantage)
- RTX 4070+ desktops: 40-80 t/s
- RTX 4060 Laptop / mainstream desktops: 12-25 t/s
- Integrated graphics: usually too slow to be practical (3-8 t/s)
Llama 70B (Q4) — the flagship local LLM
Needs ~40 GB just for weights + cache. This is workstation territory: M2/M3 Ultra Macs (96-192 GB unified memory), workstation desktops with 64+ GB RAM. Browser-based execution is impossible (memory caps at ~8 GB).
Native machines with 32 GB can technically run it via GGUF disk streaming, but it's slow (~1-3 t/s). For comfortable use you want 64+ GB of system RAM or Apple Silicon Ultra-tier unified memory.
Stable Diffusion 1.5 (512×512)
Almost any modern hardware runs SD 1.5. Practical floor: ~2 TFLOPS GPU compute. Generation time for 512×512:
- RTX 4070+: 2-5s per image
- RTX 4060 / M2 Pro: 5-15s
- Iris Xe / older integrated: 30-90s
Stable Diffusion XL (1024×1024)
More demanding: ~6 GB VRAM minimum, ~10 TFLOPS for usable throughput. Native (ComfyUI/AUTOMATIC1111):
- RTX 5090 / 4090: 3-5s per image
- RTX 4080 / M3 Max: 6-12s
- RTX 4070 / 4060 Ti: 12-22s
- RTX 4060 Laptop / M2 Pro: 18-30s
- Older / integrated GPUs: usually impractical
Whisper Small (audio→text)
Whisper is light (244M parameters). Almost any hardware runs it real-time or faster. Apple Silicon with whisper.cpp: 5-10× faster than real-time. Mainstream GPUs: 3-6× faster than real-time. Even integrated graphics handle near-real-time.
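"N× faster than real-time" translates directly into wall-clock time — a quick helper makes the arithmetic concrete (function name is illustrative):

```javascript
// Wall-clock transcription time from a real-time factor (RTF):
// 60 minutes of audio at 5× real-time takes 12 minutes.
function transcriptionMinutes(audioMinutes, realTimeFactor) {
  return audioMinutes / realTimeFactor;
}

transcriptionMinutes(60, 5);   // → 12 minutes
transcriptionMinutes(60, 10);  // → 6 minutes (Apple Silicon + whisper.cpp)
```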
Step 4 — Can your PC run modern productivity apps?
Photoshop / Lightroom / Affinity Photo
Light-to-medium GPU + 16 GB RAM is the sweet spot. Most laptops shipped after 2022 handle professional photo work without issue. Heavy generative-AI features (Photoshop's Generative Fill) are cloud-based, so local hardware doesn't matter for those specific features.
9bench Score floor: 600+ comfortable, 300+ functional with patience.
Premiere Pro / DaVinci Resolve / Final Cut Pro
Video editing is GPU-bound and RAM-hungry. 1080p editing is comfortable on any A-tier or above. 4K editing requires solid GPU performance + 32 GB RAM minimum + fast NVMe storage. RAM bandwidth matters a lot — your 9bench RAM score is a good proxy.
- 1080p editing: 9bench 600+, 16 GB RAM, any modern GPU
- 1440p editing: 9bench 1,000+, 16-32 GB RAM, RTX 4060 / M2 Pro+
- 4K editing real-time: 9bench 1,700+, 32 GB RAM, RTX 4070+ / M2 Max+
- 4K + multi-stream + heavy effects: workstation only — 9bench 2,500+
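The real-time 4K floor from the list above (score 1,700, 32 GB RAM, an RTX 4070 / M2 Max-class GPU) can be expressed as a simple predicate. The GPU check is reduced to a boolean flag because the guide keys on GPU class, not a single number — that simplification is mine:

```javascript
// Does a machine meet the real-time 4K editing floor quoted above?
function readyFor4K({ score, ramGB, gpuClassOK }) {
  return score >= 1700 && ramGB >= 32 && gpuClassOK;
}

readyFor4K({ score: 1850, ramGB: 32, gpuClassOK: true });  // → true
readyFor4K({ score: 1850, ramGB: 16, gpuClassOK: true });  // → false: RAM-limited
```

The second call illustrates the point made later in this guide: the binding constraint is often not the component you expect.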
Blender / 3D rendering
Blender Cycles is GPU-bound. EEVEE is more forgiving but still benefits from a discrete GPU. Render throughput scales roughly linearly with GFLOPS, so your 9bench GPU compute score (raw GFLOPS) is the direct predictor.
Code editing / dev work
VS Code, JetBrains IDEs, Docker, Node.js, etc. — almost entirely RAM-bound, and CPU-bound for builds. 16 GB RAM minimum, 32 GB comfortable for big monorepos. CPU multi-core matters for compilation, so 9bench's CPU·M score correlates directly with build times.
Step 5 — Read your full result page
After your test runs, click your 9bench.com/r/[hash] permalink. The result page shows:
- 9bench Score + tier label (S-tier / A-tier / Solid daily driver / Working machine / Patient & honest)
- 4-card component breakdown: GPU / CPU·1 / CPU·M / RAM scores
- Detected hardware: GPU model (when the browser allows), CPU cores + architecture, RAM (approximate; browser-capped)
- AI Capability section: dual-track Browser + Native scores, FP16 detection, memory headroom probe, 6 concrete predictions (Llama 7B/13B/70B, SDXL, SD 1.5, Whisper)
- Live LLM test button: optional — downloads transformers.js + a tiny model and measures real tokens/s in your browser. Verifies our predictions with measurement.
- Trading Card: shareable PNG export at 1080×1350 — perfect for Twitter / Reddit / LinkedIn
What 9bench doesn't tell you
Truth Series principle: be transparent about limits.
- Sustained / thermal performance: 15 seconds is a sprint. Long renders, gaming sessions, or prolonged inference may throttle on laptops. We don't measure that.
- Disk / NVMe speed: not testable in browser. Use CrystalDiskMark or similar native tool if you suspect storage is your bottleneck.
- Network speed: different problem entirely. Test at fast.com or speed.cloudflare.com.
- Specific game frame rates: we predict tier capability, not Cyberpunk 2077 average fps. For specific game predictions, use a service that aggregates tested benchmarks (CanYouRunIt, GPU benchmark databases).
- Browser cap on RAM scores: V8/SpiderMonkey/JavaScriptCore sandbox memory access, so browser RAM scores typically run at 30-50% of native bandwidth. We say so on every result page; the score is comparative across machines, not absolute.
Common questions about hardware capability
"Is my laptop too slow for ChatGPT-level AI locally?"
ChatGPT's specific model weights aren't public, but the rough equivalent for local use is Llama 13B or larger. If your 9bench AI Capability Score is 1,000+ (browser) or 2,500+ (native), Llama 13B runs comfortably. Below 600, consider sticking with cloud-based ChatGPT/Claude — local AI will feel slow.
"Can my old gaming PC handle modern AI workloads?"
Surprisingly often, yes. The GPU you bought for gaming in 2020 is usually a respectable AI machine in 2026. RTX 2080, RTX 3060, RX 5700 XT — all run Llama 7B Q4 at usable speeds (15-30 t/s) and SD 1.5 image generation in 5-15 seconds.
"Should I upgrade or is my hardware fine?"
Test before you buy. Your specific bottleneck might not be what you think. RAM bandwidth often limits LLM inference more than GPU compute. CPU multi-core often limits compile/render times more than peak frequency. 9bench shows you which component is actually constraining you — look at the Component Imbalance Callout on your result page.
Try it now (15 seconds)
Open 9bench.com/test in your current browser. Don't switch tabs during the test — browsers throttle background tabs and that will skew your score. After the test, share your permalink — your AI Capability Score and game tier travel with the link so anyone can see your hardware capability without re-running the test.
No download. No account. No bias. Just the honest answer to "can my PC run it" — for whatever it means in 2026.