Troubleshoot full disks and disk resizing


This page describes common issues that you might run into when resizing a persistent disk or when your persistent disk is full, and how to fix each of them.

Before you begin

  • Always create a snapshot of your disk before performing any troubleshooting steps to ensure that your data is backed up.
  • If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:

    Select the tab for how you plan to use the samples on this page:

    Console

    When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

    gcloud

    1. Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init
    2. Set a default region and zone.
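
      For example, the following commands set a default region and zone for the gcloud CLI. The us-central1 region and us-central1-a zone are placeholder values; substitute the region and zone that you use:

      gcloud config set compute/region us-central1
      gcloud config set compute/zone us-central1-a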

    REST

    To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

      Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init
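
    For example, the following request uses an access token from the gcloud CLI to read a disk's metadata over REST. This is a minimal sketch; PROJECT_ID, ZONE, and DISK_NAME are placeholders for your own values:

      curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
          "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"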

    For more information, see Authenticate for using REST in the Google Cloud authentication documentation.

Rate limited error when modifying a disk

The following are common errors you might encounter when you attempt to modify your Extreme Persistent Disk or Google Cloud Hyperdisk. You might see these errors appear in a number of places, such as in your serial console output or in application logs.

Disk cannot be resized due to being rate limited.
Cannot update provisioned iops due to being rate limited.
Cannot update provisioned throughput due to being rate limited.

Review the following time limits for modifying disks:

  • You can resize an Extreme Persistent Disk or Hyperdisk Throughput volume only once in a 6-hour period.
  • You can resize a Hyperdisk Extreme volume only once in a 4-hour period.
  • You can change the provisioned IOPS or throughput for a Hyperdisk volume only once in a 4-hour period.

To resolve these errors, wait the required amount of time after your last modification before attempting to modify the disk again.
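
For example, after the rate-limiting period has passed, you can retry the change with the gcloud CLI. The following is a sketch for updating a Hyperdisk Extreme volume's provisioned IOPS; the disk name example-disk-1, the zone, and the IOPS value are placeholders:

  gcloud compute disks update example-disk-1 \
      --zone=us-central1-a \
      --provisioned-iops=10000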

Disk capacity errors

Full disks

The following are common errors you might encounter when your persistent disk reaches full capacity. You might see these errors appear in a number of places, such as in your serial console output or in application logs.

No space left on device
Not enough storage is available to process this command

To resolve this issue, do the following:

  1. Create a snapshot of the disk.

  2. Delete files that you don't need on the disk to free up space.

  3. If your disk requires more space after this, resize the disk, as shown in the sketch after this list.
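
The following is a minimal sketch of these steps on a Linux VM, assuming a data disk named example-disk-1 in zone us-central1-a that is mounted at /mnt/disks/data; all of these names and values are placeholders:

  # Back up the disk before making changes (step 1).
  gcloud compute disks snapshot example-disk-1 \
      --zone=us-central1-a \
      --snapshot-names=example-disk-1-backup

  # On the VM, list the largest files and directories, then delete
  # the ones you don't need (step 2).
  sudo du -ah /mnt/disks/data | sort -rh | head -n 20

  # If the disk still needs more space, resize it (step 3).
  gcloud compute disks resize example-disk-1 \
      --zone=us-central1-a \
      --size=200GB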

Inaccessible VM due to full boot disk

Your VM might become inaccessible if its boot disk is full. This scenario can be difficult to identify because it's not always obvious that a VM connectivity issue is caused by a full boot disk. The following are examples of common errors you might encounter if you can't access your VM from the Google Cloud CLI because the boot disk is full:

  Network error: Software caused connection abort
  
  ERROR: (gcloud.compute.ssh) Could not SSH into the instance.  It is possible
  that your SSH key has not propagated to the instance yet. Try running this
  command again.  If you still cannot connect, verify that the firewall and
  instance are set to accept ssh traffic.
  
  You cannot connect to the VM instance because of an unexpected error. Wait a
  few moments and then try again.
  
  No space left on device
  
  ERROR Exception calling the response handler. [Errno 2] No usable temporary
  directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']...
  

To resolve these issues, do the following:

  1. Confirm that the VM's SSH failure is due to a full boot disk:

    gcloud compute instances tail-serial-port-output VM_NAME
    

    If the boot disk is full, the resulting output contains the message No space left on device. The sketch after this procedure shows a way to filter the output for this message.

  2. If you have not already done so, create a snapshot of the VM's boot disk.

  3. Try to restart the VM.

  4. If you still can't access the VM, do the following:

    1. Stop the VM:

      gcloud compute instances stop VM_NAME
      

      Replace VM_NAME with the name of your VM.

    2. Increase the size of the boot disk:

      gcloud compute disks resize BOOT_DISK_NAME --size DISK_SIZE
      

      Replace the following:

      • BOOT_DISK_NAME: the name of your VM's boot disk
      • DISK_SIZE: the new larger size, in gigabytes, for the boot disk

      For example, to resize a disk named example-disk-1 to 6 GB, run the following command:

      gcloud compute disks resize example-disk-1 --size=6GB
      
    3. Start the VM:

      gcloud compute instances start VM_NAME
      
  5. Try connecting to the VM over SSH again. If you still can't access the VM, restore the boot disk from the snapshot you created in step 2, or manually resize the VM's root file system as described in the following section.
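
To check the serial port output from step 1 for the full-disk message without streaming it live, you can fetch the output and filter it. This is a sketch; VM_NAME is a placeholder for your VM's name:

  gcloud compute instances get-serial-port-output VM_NAME | grep "No space left on device"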

File system issues

File system resize

After you resize a VM boot disk, most VMs automatically resize the root file system the next time the VM restarts. However, for some VM image types, you might have to resize the file system manually. If your VM does not support automatic root file system resizing, or if you resized a data (non-boot) persistent disk, you must manually resize the file system and partitions.

To check if your root file system expanded automatically after you resized your VM boot disk, do the following:

  1. Check whether your VM resized the root file system by using one of the following methods:

    • Inspect your serial port output. Look for a line that indicates the root partition was resized.

      For example, on VMs with Debian images, if the automatic resize was successful then the console logs include the line ... expand-root.sh[..]: Resizing ext4 filesystem on /dev/sda1.

    • If you can connect to a Linux VM using SSH, run the command df -h to check if there is free disk space.

      For example, this output shows that the root file system is 92% full:

      Filesystem                                    Size  Used Avail Use% Mounted on
      udev                                           63G     0   63G   0% /dev
      tmpfs                                          13G  1.4M   13G   1% /run
      /dev/sda1                                     339G  315G   24G  92% /
      
  2. If your VM didn't resize the root file system, manually resize the file system and partitions, as shown in the following sketch.
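
The following is a minimal sketch of a manual resize on a Linux VM, assuming the layout from the preceding df -h example: an ext4 root file system on partition 1 of /dev/sda. The device names are assumptions, so adjust them for your VM; if your file system is XFS, use xfs_growfs instead of resize2fs:

  # Grow partition 1 of /dev/sda to fill the resized disk.
  # growpart is provided by the cloud-guest-utils package on
  # Debian and Ubuntu.
  sudo growpart /dev/sda 1

  # Expand the ext4 file system to fill the grown partition.
  sudo resize2fs /dev/sda1

  # Confirm the new size of the root file system.
  df -h /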