Pods provide instant access to powerful GPU and CPU resources for AI development, machine learning, rendering, and other compute-intensive workloads.

You have full control over your computing environment, allowing you to customize software, storage, and networking to match your exact requirements. Alternatively, you can use pre-configured templates that include ready-to-use environments for popular AI frameworks and applications.

When you’re ready to get started, follow this tutorial to create an account and deploy your first Pod.

Key components

Each Pod consists of these core components:

  • Container environment: An Ubuntu Linux-based container that can run almost any compatible software.
  • Unique identifier: Each Pod receives a dynamic ID (e.g., 2s56cp0pof1rmt) for management and access.
  • Storage:
    • Container volume: Houses the operating system and temporary storage.
    • Disk volume: Persistent storage that is preserved between Pod starts and stops.
    • Network volume (optional): Permanent, portable storage that can be moved between machines and persists even after Pod deletion.
  • Hardware resources: Allocated vCPU, system RAM, and multiple GPUs (based on your selection).
  • Network connectivity: A proxy connection enabling web access to any exposed port on your container.

Storage

Pods offer three types of storage to match different use cases:

Every Pod comes with a resizable container volume that houses the operating system and stores temporary files, which are cleared after the Pod stops.

Disk volumes provide persistent storage that is preserved throughout the Pod’s lease, functioning like a dedicated hard drive. Data stored in the disk volume directory (/workspace by default) persists when you stop the Pod, but is erased when the Pod is deleted.

Optional network volumes provide more flexible permanent storage that can be transferred between Pods, replacing the disk volume when attached. When using a Pod with a network volume attached, you can safely delete the Pod without losing the data stored in your network volume directory (/workspace by default).
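As a minimal sketch of what this means in practice (assuming the default /workspace mount described above), code that needs data to survive a Pod stop should write under the persistent volume rather than elsewhere in the container volume:

```python
from pathlib import Path

# Default mount point of the disk or network volume (per this page).
# Files written outside this directory live in the container volume
# and are cleared when the Pod stops.
PERSISTENT_ROOT = Path("/workspace")

def persistent_path(name: str) -> Path:
    """Return a path under the persistent volume for data that must survive a stop."""
    return PERSISTENT_ROOT / name

# Hypothetical example: a training checkpoint that should persist across restarts.
checkpoint = persistent_path("checkpoints/model.pt")
```

The path helper is illustrative only; any file saved under /workspace behaves the same way.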

To learn more, see Storage options.

Deployment options

You can deploy Pods in several ways:

  • From a template: Pre-configured environments for quick setup of common workflows.
  • Custom containers: Pull from any compatible container registry such as Docker Hub, GitHub Container Registry, or Amazon ECR.
  • Custom images: Build and deploy your own container images.

When building a container image for RunPod on a Mac (Apple Silicon), use the flag --platform linux/amd64 to ensure your image is compatible with the platform.

Connecting to your Pod

Once deployed, you can connect to your Pod through:

  • SSH: Direct command-line access for development and management.
  • Web proxy: HTTP access to exposed web services via URLs in the format https://[pod-id]-[port].proxy.runpod.net.
  • API: Programmatic access and control through the RunPod API.
  • JupyterLab: A web-based IDE for data science and machine learning.
  • VSCode: Connect to your Pod through VSCode, working within your volume directory as if the files were stored on your local machine.
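For the web proxy, the URL format above is mechanical, so it can be constructed programmatically. A small sketch, using the sample Pod ID from this page and a hypothetical helper name:

```python
def proxy_url(pod_id: str, port: int) -> str:
    """Build the web-proxy URL for an exposed port, per the documented format."""
    return f"https://{pod_id}-{port}.proxy.runpod.net"

# Example with the sample Pod ID from this page and JupyterLab's default port:
url = proxy_url("2s56cp0pof1rmt", 8888)
# https://2s56cp0pof1rmt-8888.proxy.runpod.net
```

Note that the proxy only reaches ports you explicitly exposed when configuring the Pod.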

Data transfer

You can transfer data from your Pod to most major cloud providers, or to your local machine using the RunPod CLI.

To learn more about all available options, see Transfer files.

Customization options

Pods offer extensive customization to match your specific requirements.

You can select your preferred GPU type and quantity, adjust system disk size, and specify your container image.

Additionally, you can configure custom start commands, set environment variables, define exposed HTTP/TCP ports, and implement various storage configurations to optimize your Pod for your specific workload.
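As a rough illustration of the knobs listed above, a Pod configuration could be captured as a plain mapping. The field names below are hypothetical, chosen for readability; they are not the actual RunPod API schema:

```python
# Hypothetical Pod configuration sketch (field names are illustrative,
# not the RunPod API schema).
pod_config = {
    "gpu_type": "NVIDIA A100",                    # preferred GPU type
    "gpu_count": 2,                               # number of GPUs
    "container_image": "myuser/my-image:latest",  # your container image
    "container_disk_gb": 20,                      # resizable container volume
    "volume_gb": 50,                              # persistent disk volume
    "env": {"HF_HOME": "/workspace/hf"},          # environment variables
    "ports": ["8888/http", "22/tcp"],             # exposed HTTP/TCP ports
    "start_command": "jupyter lab --ip=0.0.0.0",  # custom start command
}
```

In practice you would set these values through the Pod creation page or the RunPod API.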

Pod types

RunPod offers two types of Pod:

  • Secure Cloud: Operates in T3/T4 data centers, providing high reliability and security for enterprise and production workloads.
  • Community Cloud: Connects individual compute providers to users through a vetted, secure peer-to-peer system, with competitive pricing options.

Deploy a Pod

Follow these steps to deploy a Pod:

  1. Choose a Pod based on your computing needs and budget.
  2. Navigate to the Pod creation page.
  3. Configure your Pod settings, including GPU type, storage, and networking options.
  4. Launch your Pod and connect using SSH, JupyterLab, or your preferred remote access method.
  5. Manage your Pod through the RunPod console.

Pricing

Pods are billed by the minute with no fees for ingress/egress. RunPod also offers long-term savings plans for extended usage patterns. See GPU pricing for details.
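Per-minute billing means you pay the hourly rate prorated to the minutes used. A quick sketch, using a hypothetical rate of $0.60/hr for illustration:

```python
def pod_cost(hourly_rate: float, minutes: int) -> float:
    """Prorate an hourly GPU rate to per-minute billing."""
    return round(hourly_rate / 60 * minutes, 4)

# Hypothetical $0.60/hr Pod running for 90 minutes:
cost = pod_cost(0.60, 90)
# 0.9 (i.e., $0.90)
```

Actual rates vary by GPU type and cloud type; see the GPU pricing page for current figures.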

Next steps

Ready to get started? Explore these pages to learn more: