2017-09-15
Ed Elliott

Overview of Azure Virtual Machine IO performance and throttles

In this post we are going to look at the IO performance of a virtual machine in Azure. We are specifically talking about GS4 machines with premium managed disks. The theory should apply to all classes of machine, but some, such as the L series, have a different configuration for the temporary drive, which is important.

The data in this post was gathered using a mixture of this excellent post https://blogs.technet.microsoft.com/xiangwu/2017/05/14/azure-vm-storage-performance-and-throttling-demystify/ and our own measurements, generating IO with diskspd and measuring with perfmon.
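We won’t cover diskspd in detail here, but for reference an invocation along these lines (the test file path and the exact parameters are illustrative, not necessarily the ones we used) generates sixty seconds of random read IO with caching disabled and reports latency statistics:

    diskspd -c10G -d60 -r -w0 -b64K -o32 -t4 -Sh -L E:\iotest.dat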

Speed vs Throughput

It is worth pointing out that Azure specifies the performance of the disks in terms of throughput: you get X IOPS (IOs per second) and X MB/second. Each IO is measured in 256KB units, so if you send 255KB in one IO it counts as 1 IO, but if you send 257KB in one IO it counts as 2 IOs. If your IOs are not an exact multiple of 256KB then you cannot reach the peak performance.
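To make the rounding concrete, here is a minimal sketch (our own helper, not anything Azure provides) of the 256KB accounting described above:

    import math

    def io_units(io_size_bytes, unit=256 * 1024):
        # Each IO is billed in 256KB units, rounded up
        return math.ceil(io_size_bytes / unit)

    print(io_units(255 * 1024))  # 255KB -> 1 IO
    print(io_units(257 * 1024))  # 257KB -> 2 IOs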

Latency isn’t specified anywhere for managed disks; the pricing page has the statement that you will get “less than one millisecond latency for read operations”, and other sources, such as the demystifying post referenced above and in the references section, say latency is around 2ms.

In our tests it looks like 1-2ms applies to cached data; otherwise latency is around 5ms. This is very likely to be region dependent – if you care whether you get 1ms or 5ms latency then you will need to test and monitor the performance yourself.

Throttles

What is it that determines the read and write performance and throughput to disks from virtual machines in Azure?

There are three factors that affect the performance:

  • The Disk Size / Speed
  • Disk Caching
  • Virtual Machine IO throttles

Disk Size / Speed

The first, disk size, is the easiest to understand and reason about. In general terms, the larger the disk the more throughput – currently the 2TB and 4TB disks (P40 and P50) have the same throughput and performance, but for every other disk size, the larger (and therefore the more expensive) the disk, the more throughput you get.

The disks are throttled according to the size, the documentation for a P30 shows that it can handle 5,000 IOPs per disk and a throughput per disk of 200 MB/second. The throughput is much lower than the available IOPs would allow, the IOPs limit only really comes into effect if you are sending lots of really small IO’s.

There isn’t a lot of point optimizing your IO to hit the 256KB IO size: even though the disks can handle 5,000 IOPS, the throughput is throttled much lower. 5,000 IOPS x 256 KB / 1024 = 1,250 MB/second, far above the 200 MB/second limit, so as long as your IOs are at least 40 KB (200 MB/second / 5,000 IOPS) you will be capped by the throughput limit rather than the IOPS limit.
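The same arithmetic as a small sketch, using the P30 figures from the documentation (the helper itself is just illustration):

    def crossover_io_kb(iops_limit, throughput_mb_per_sec):
        # IO size above which the throughput throttle binds before the IOPS throttle
        return throughput_mb_per_sec * 1024.0 / iops_limit

    # P30: 5,000 IOPS and 200 MB/second -> ~41 KB
    print(crossover_io_kb(5000, 200))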

The throughput throttles for disks do not apply to IOs served from cache; instead these are limited by the virtual machine throttles, which for the GS series is 50 MB/sec/core.

Virtual Machine

The VM throttles are where it starts to get a little more interesting; like the disks, the more you pay the more you get. A GS4 has a “Max cached and temp storage throughput” of 80,000 IOPS and 800 MB/sec, with a cache size of 2,112 GB. The GS4 also has a “Max un-cached disk throughput” of 40,000 IOPS and 1,000 MB/sec.

When you add a disk to a virtual machine you have the choice of no caching, read-only caching, or read and write caching. The temporary drive on most series comes with read and write caching enabled and you cannot disable it. We will ignore the temporary drive for a minute and pretend it doesn’t exist (someone unassigned the drive letter, etc.).

What happens is that you get up to 1,000 MB/sec for disks attached with no caching.

If you add disks with some form of caching enabled then they get up to 800 MB/sec, but it is one big bucket – you don’t get a combined throughput of 1,800 MB/sec; the total IO across cached and un-cached disks is 1,000 MB/sec.

If we add the temporary drive back in, things start to get more complicated. On the GS series the temporary drive has read and write caching enabled, which means it is limited to 800 MB/sec, and that 800 MB/sec is shared with any other disks you have added that also have caching enabled. *But* that throughput does not come from the un-cached “bucket”, so you can actually get up to 1,800 MB/sec of total throughput if you use 800 MB/sec on the temporary disk and 1,000 MB/sec on un-cached disks.
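As a toy model of how we understand the GS4 buckets (the caps come from the VM size documentation; the two-bucket behaviour is our interpretation from testing, not a documented API):

    CACHED_CAP_MB_SEC = 800     # max cached and temp storage throughput (GS4)
    UNCACHED_CAP_MB_SEC = 1000  # max un-cached disk throughput (GS4)

    def combined_throughput(cached_demand, uncached_demand):
        # Cached/temp IO and un-cached IO draw from separate buckets,
        # so the theoretical maximum is the sum of the two caps
        return (min(cached_demand, CACHED_CAP_MB_SEC)
                + min(uncached_demand, UNCACHED_CAP_MB_SEC))

    print(combined_throughput(800, 1000))  # 1,800 MB/sec: temp drive + un-cached disks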

The local disk is also fast; the only downside is that it is limited to 800 MB/sec, so you get a choice: maximum throughput but slower access with un-cached disks, or lower throughput but faster access with the local disk.

Choosing Disks

Choosing the disks to get the optimum performance is basically a game of deciding whether you would prefer higher, steadier throughput, or slightly less throughput with possibly faster disk access if your data is in the cache.

Using the temporary local disk and un-cached disks:

  • Local disk = 800 MB/sec max throughput, low latency ~2 ms
  • 4 x P40s = 4 x 250 MB/sec throughput = 1,000 MB/sec, higher latency ~5 ms

(The latency numbers are from our testing; you must verify your own figures.)

In this scenario the max combined throughput is ~1,800 MB/sec, but you are limited to about 448 GB on the cached (temporary) disk, and as the cache that sits behind the disks is 2,112 GB it is a bit of a waste.

The disks are throttled, but if you have caching enabled then you can get more throughput than the disk alone allows, because anything served from the cache doesn’t count towards the disk throughput. It does still count towards the machine throughput throttle, so bandwidth is not unlimited.

Configuring your disks isn’t entirely straightforward, but once you understand the different throttles and limits, and you truly understand how your application uses IO, it is pretty simple to design the disks for performance in Azure. One positive is that there are limited choices, so you get what you get – or you pay to get more!

Monitoring throttling

There isn’t one specific way to find out if you are being throttled. If you are getting poor performance, look at the latency you are getting to each disk: the best way is to use the perfmon counter “PhysicalDisk\Avg. Disk sec/Transfer” to show the latency per disk and “PhysicalDisk\Disk Bytes/sec” to show the throughput. If a disk is getting near its per-disk limit then you are probably being throttled; the same applies to the cached and un-cached VM limits, bearing in mind you will need to manually aggregate the numbers into the cached/un-cached buckets.
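For example, you can sample both counters from the command line with typeperf (one-second interval; you will still need to aggregate the per-disk instances into the cached and un-cached buckets yourself):

    typeperf "\PhysicalDisk(*)\Avg. Disk sec/Transfer" "\PhysicalDisk(*)\Disk Bytes/sec" -si 1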

You can also raise a support call with Microsoft and ask whether you were throttled within a given time range, and they will tell you.

This recent post talks about using alerts in the portal and guest diagnostics to alert on throttle limits: https://blogs.msdn.microsoft.com/mast/2017/09/12/how-to-monitor-and-alert-potential-disk-and-vm-level-io-throttling-on-windows-vms-using-arm/

References

Azure VM Sizes - Sizes for Windows virtual machines in Azure - https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes

Managed Disk Storage - https://azure.microsoft.com/en-gb/pricing/details/managed-disks/

High-performance Premium Storage for VM Disks - https://docs.microsoft.com/en-us/azure/storage/common/storage-premium-storage

Azure VM Storage Performance and Throttling Demystified - https://blogs.technet.microsoft.com/xiangwu/2017/05/14/azure-vm-storage-performance-and-throttling-demystify/
