This guide explains how to fine-tune a large language model using Runpod and Axolotl. You’ll learn how to select a base model, configure your training environment, and start the fine-tuning process.
Before you begin fine-tuning, ensure you have:
To start fine-tuning, you’ll need to choose a base model from Hugging Face:
Navigate to the Fine Tuning section in the sidebar
Enter the Hugging Face model ID in the Base Model field
For example: `NousResearch/Meta-Llama-3-8B`
For gated models (requiring special access):
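Gated models require a Hugging Face access token before their weights can be downloaded. A minimal sketch of making a token available in the pod's shell (the token value below is a placeholder; create a real one at huggingface.co/settings/tokens):

```shell
# Export your Hugging Face access token so HF tooling can authenticate.
# "hf_xxx" is a placeholder -- substitute your real token.
export HF_TOKEN="hf_xxx"
# Most Hugging Face tooling (huggingface-cli, huggingface_hub, Axolotl)
# reads the HF_TOKEN environment variable automatically.
```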
You can choose a dataset from Hugging Face for fine-tuning:
For example: `tatsu-lab/alpaca`
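The `tatsu-lab/alpaca` dataset stores each training example as `instruction`/`input`/`output` fields. A sketch of that record shape, written to a local file (the file name is illustrative):

```shell
# Each alpaca-format record has instruction/input/output fields.
cat > sample_alpaca.jsonl <<'EOF'
{"instruction": "Translate to French.", "input": "Hello", "output": "Bonjour"}
EOF
cat sample_alpaca.jsonl
```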
Follow these steps to set up your training environment:
Click Deploy the Fine Tuning Pod
Select a GPU instance based on your model’s requirements:
Monitor the system logs for deployment progress
Wait for the success message: "You've successfully configured your training environment!"
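When picking a GPU instance, a rough back-of-envelope for memory is 2 bytes per parameter for bf16 weights, with full fine-tuning needing several times more for gradients and optimizer state (LoRA needs far less). A sketch of that arithmetic for an 8B-parameter model:

```shell
# Rough VRAM estimate: bf16 stores 2 bytes per parameter, so weights
# alone for an 8B model need ~16 GB; full fine-tuning adds gradient and
# optimizer state on top of that, while LoRA trains only small adapters.
PARAMS_B=8                    # parameters, in billions
WEIGHTS_GB=$((PARAMS_B * 2))  # 2 bytes per parameter in bf16
echo "bf16 weights alone: ~${WEIGHTS_GB} GB"
```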
After your pod is deployed and active, you can connect using any of these methods:
Go to your Fine Tuning pod dashboard
Click Connect and choose your preferred connection method:
To use SSH, add your public SSH key in your account settings. The system automatically adds your key to the pod’s `authorized_keys` file.
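Generating a key pair and connecting can be sketched as follows (`<pod-host>` and `<port>` are placeholders for the values shown in your pod's Connect dialog; the key file path is illustrative):

```shell
# Generate an ed25519 key pair; the .pub file is what you paste into
# your account settings.
ssh-keygen -t ed25519 -f ./runpod_key -N "" -C "runpod" -q
cat ./runpod_key.pub
# Then connect using the host and port from the Connect dialog, e.g.:
#   ssh root@<pod-host> -p <port> -i ./runpod_key
```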
Your training environment includes the following directory structure in `/workspace/fine-tuning/`:
- `examples/`: Sample configurations and scripts
- `outputs/`: Training results and model outputs
- `config.yaml`: Training parameters for your model

The system generates an initial `config.yaml` based on your selected base model and dataset.
The `config.yaml` file controls your fine-tuning parameters. Here’s how to customize it:
Open the configuration file:
Review and adjust the parameters based on your specific use case
Here’s an example configuration with common parameters:
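A minimal sketch of an Axolotl configuration using the base model and dataset from the earlier steps (field names follow Axolotl's config schema; the values are illustrative defaults, not tuned recommendations):

```yaml
base_model: NousResearch/Meta-Llama-3-8B

datasets:
  - path: tatsu-lab/alpaca
    type: alpaca

output_dir: ./outputs

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
optimizer: adamw_torch
lr_scheduler: cosine

# LoRA keeps memory requirements low by training small adapter matrices
adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true

bf16: auto
gradient_checkpointing: true
```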
The `config.yaml` file contains all the hyperparameters needed for fine-tuning. You may need to iterate on these settings to achieve optimal results.
For more configuration examples, visit the Axolotl examples repository.
Once your configuration is ready, follow these steps:
Start the training process:
Monitor the training progress in your terminal
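The launch step above can be sketched as follows, assuming the pod's preinstalled Axolotl environment and the generated `config.yaml` (this is Axolotl's standard `accelerate` entry point):

```shell
# Run from the workspace directory so Axolotl finds config.yaml.
cd /workspace/fine-tuning
# Training logs stream to the terminal; checkpoints are written to the
# output_dir set in config.yaml.
accelerate launch -m axolotl.cli.train config.yaml
```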
After completing the fine-tuning process, you can share your model:
Log in to Hugging Face:
Create a new repository on Hugging Face if needed
Upload your model:
Replace `<your-username>` with your Hugging Face username and `<model-name>` with your desired model name.
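The login and upload steps can be sketched with the Hugging Face CLI (the `./outputs` path assumes your training results landed in the `outputs/` directory described earlier; the repository ID placeholders are as above):

```shell
# Authenticate interactively (prompts for your access token).
huggingface-cli login
# Upload the training outputs to your Hub repository.
huggingface-cli upload <your-username>/<model-name> ./outputs
```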
For more information about fine-tuning with Axolotl, see: