For information on pricing, see GPU pricing.

GPU types

This table lists all GPU types available on RunPod:

| GPU ID | Display Name | Memory (GB) |
| --- | --- | --- |
| AMD Instinct MI300X OAM | MI300X | 192 |
| NVIDIA A100 80GB PCIe | A100 PCIe | 80 |
| NVIDIA A100-SXM4-80GB | A100 SXM | 80 |
| NVIDIA A30 | A30 | 24 |
| NVIDIA A40 | A40 | 48 |
| NVIDIA B200 | B200 | 180 |
| NVIDIA GeForce RTX 3070 | RTX 3070 | 8 |
| NVIDIA GeForce RTX 3080 | RTX 3080 | 10 |
| NVIDIA GeForce RTX 3080 Ti | RTX 3080 Ti | 12 |
| NVIDIA GeForce RTX 3090 | RTX 3090 | 24 |
| NVIDIA GeForce RTX 3090 Ti | RTX 3090 Ti | 24 |
| NVIDIA GeForce RTX 4070 Ti | RTX 4070 Ti | 12 |
| NVIDIA GeForce RTX 4080 | RTX 4080 | 16 |
| NVIDIA GeForce RTX 4080 SUPER | RTX 4080 SUPER | 16 |
| NVIDIA GeForce RTX 4090 | RTX 4090 | 24 |
| NVIDIA GeForce RTX 5080 | RTX 5080 | 16 |
| NVIDIA GeForce RTX 5090 | RTX 5090 | 32 |
| NVIDIA H100 80GB HBM3 | H100 SXM | 80 |
| NVIDIA H100 NVL | H100 NVL | 94 |
| NVIDIA H100 PCIe | H100 PCIe | 80 |
| NVIDIA H200 | H200 SXM | 141 |
| NVIDIA L4 | L4 | 24 |
| NVIDIA L40 | L40 | 48 |
| NVIDIA L40S | L40S | 48 |
| NVIDIA RTX 2000 Ada Generation | RTX 2000 Ada | 16 |
| NVIDIA RTX 4000 Ada Generation | RTX 4000 Ada | 20 |
| NVIDIA RTX 4000 SFF Ada Generation | RTX 4000 Ada SFF | 20 |
| NVIDIA RTX 5000 Ada Generation | RTX 5000 Ada | 32 |
| NVIDIA RTX 6000 Ada Generation | RTX 6000 Ada | 48 |
| NVIDIA RTX A2000 | RTX A2000 | 6 |
| NVIDIA RTX A4000 | RTX A4000 | 16 |
| NVIDIA RTX A4500 | RTX A4500 | 20 |
| NVIDIA RTX A5000 | RTX A5000 | 24 |
| NVIDIA RTX A6000 | RTX A6000 | 48 |
| Tesla V100-FHHL-16GB | V100 FHHL | 16 |
| Tesla V100-PCIE-16GB | Tesla V100 | 16 |
| Tesla V100-SXM2-16GB | V100 SXM2 | 16 |
| Tesla V100-SXM2-32GB | V100 SXM2 32GB | 32 |
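The list above can also be retrieved programmatically. Below is a minimal sketch that queries the RunPod GraphQL API for GPU types; the `gpuTypes` query, its field names, and the endpoint URL are assumptions based on the public API, so verify them against the current schema before relying on this.

```python
# Sketch: list available GPU types via the RunPod GraphQL API.
# Assumptions: the endpoint URL and the gpuTypes query/field names.
import json
import os
import urllib.request

GPU_TYPES_QUERY = """
query GpuTypes {
  gpuTypes {
    id
    displayName
    memoryInGb
  }
}
"""

def build_request(api_key: str) -> urllib.request.Request:
    """Build the POST request carrying the GraphQL query."""
    payload = json.dumps({"query": GPU_TYPES_QUERY}).encode()
    return urllib.request.Request(
        # Assumed endpoint URL; check the API reference for the current one.
        f"https://api.runpod.io/graphql?api_key={api_key}",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    key = os.environ.get("RUNPOD_API_KEY")
    if key:  # only send the request when a real API key is configured
        with urllib.request.urlopen(build_request(key)) as resp:
            for gpu in json.load(resp)["data"]["gpuTypes"]:
                print(gpu["id"], gpu["displayName"], gpu["memoryInGb"])
```

The `id` values returned by such a query are the GPU IDs shown in the first column of the table.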

GPU pools

The table below lists the GPU pools you can use to define which GPUs are available to an endpoint's workers after deployment.

Use GPU pools when defining requirements for repositories published to the RunPod Hub, or when specifying GPU requirements for an endpoint with the RunPod GraphQL API.

| Pool ID | GPUs Included | Memory (GB) |
| --- | --- | --- |
| AMPERE_16 | A4000, A4500, RTX 4000, RTX 2000 | 16 |
| AMPERE_24 | L4, A5000, 3090 | 24 |
| ADA_24 | 4090 | 24 |
| AMPERE_48 | A6000, A40 | 48 |
| ADA_48_PRO | L40, L40S, 6000 Ada | 48 |
| AMPERE_80 | A100 | 80 |
| ADA_80_PRO | H100 | 80 |
| HOPPER_141 | H200 | 141 |
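To illustrate how pool IDs are used when specifying GPU requirements through the GraphQL API, here is a sketch that builds an endpoint-creation payload. The `saveEndpoint` mutation name, its input fields, and the comma-separated `gpuIds` format are assumptions based on the public API; confirm them against the current schema.

```python
# Sketch: build a GraphQL payload that pins an endpoint to GPU pools.
# Assumptions: the saveEndpoint mutation, its input fields, and the
# comma-separated gpuIds format are not confirmed by this page.
import json

def build_endpoint_mutation(name: str, template_id: str, pool_ids: list) -> dict:
    """Return a GraphQL payload requesting workers from the given pools."""
    mutation = """
    mutation SaveEndpoint($input: EndpointInput!) {
      saveEndpoint(input: $input) { id gpuIds }
    }
    """
    return {
        "query": mutation,
        "variables": {
            "input": {
                "name": name,
                "templateId": template_id,  # hypothetical template reference
                # Pool IDs from the table above, joined in priority order.
                "gpuIds": ",".join(pool_ids),
                "workersMin": 0,
                "workersMax": 1,
            }
        },
    }

if __name__ == "__main__":
    payload = build_endpoint_mutation(
        "my-endpoint", "template-123", ["ADA_24", "AMPERE_24"]
    )
    print(json.dumps(payload, indent=2))
```

Listing multiple pool IDs lets the scheduler fall back to the next pool when the preferred one has no capacity.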