Search Results

Don’t use Storage Spaces…

… if you care about performance in the slightest. That’s it, really. You don’t need to read any further. What is Storage Spaces? Have a read of this quick overview: https://www.windowscentral.com/how-use-storage-spaces-windows-10 I had some spare computer parts lying around, so I thought I’d rebuild my Windows 10 desktop at home. I have 4 x 4TB Hitachi SATA drives and a spare hardware RAID controller, so I decided to put them in my desktop. I had heard of Storage Spaces and wanted to try it out to see what the performance would be like, considering there was no extra hardware involved in creating
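
For reference, building the equivalent setup boils down to a handful of Storage cmdlets. A minimal sketch, assuming four unused data disks are visible to Windows; the pool and disk names are illustrative, not from the post:

```powershell
# Find disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from them (names are placeholders)
New-StoragePool -FriendlyName "DataPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Carve out a parity (RAID-5-like) virtual disk using all available capacity
New-VirtualDisk -StoragePoolFriendlyName "DataPool" `
    -FriendlyName "DataDisk" `
    -ResiliencySettingName Parity `
    -UseMaximumSize
```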

Uploading Files To Data Lake Storage With PowerShell Part Three

Picking up from where we left off last month, today we’re looking at setting the Azure Data Lake Storage account. This post is part of a series on automating the process of uploading files to Azure Data Lake Store. Although the entire script is available on Git (posted below), I’m going to cover one function per post so that I can go into greater depth. Part One of this blog series focused on logging in to an Azure Subscription. Part Two focused on setting the Resource Group. As mentioned, today’s function starts on row 74 and is
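
The excerpt cuts off before the function itself, but the pattern it describes is the usual test-then-create one. A hedged sketch using the AzureRM Data Lake Store cmdlets; the function name and parameters here are illustrative, not the post’s exact code:

```powershell
function Set-AzureDataLakeStoreAccount {
    param (
        [string] $ResourceGroupName,
        [string] $DataLakeStoreName,
        [string] $Location
    )
    # Test-AzureRmDataLakeStoreAccount returns $true if the account already exists
    if (-not (Test-AzureRmDataLakeStoreAccount -Name $DataLakeStoreName)) {
        New-AzureRmDataLakeStoreAccount -ResourceGroupName $ResourceGroupName `
            -Name $DataLakeStoreName `
            -Location $Location
    }
}
```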

Uploading Files To Data Lake Storage With PowerShell Part Two

Carrying on from our previous post on automating the process of uploading files to Azure Data Lake Store, we will check if a Resource Group exists, and if it does not then we will create it. Although the entire script is available on Git (posted below), I’m going to cover one function per post so that I can go into greater depth. Part One of this blog series focused on logging in to an Azure Subscription. Today’s function starts on line 42 and is called Set-AzureResourceGroup. Before we go into it though, I want to take a moment
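
The behaviour described (check for the Resource Group, create it if absent) can be sketched as follows; only the function name Set-AzureResourceGroup comes from the post, the parameter names are assumptions:

```powershell
function Set-AzureResourceGroup {
    param (
        [string] $ResourceGroupName,
        [string] $Location
    )
    # Get-AzureRmResourceGroup errors if the group is missing, so suppress that
    $rg = Get-AzureRmResourceGroup -Name $ResourceGroupName -ErrorAction SilentlyContinue
    if (-not $rg) {
        New-AzureRmResourceGroup -Name $ResourceGroupName -Location $Location
    }
}
```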

In a partitioned world, don’t violate core directive

This is another short post stemming from a recent talk I gave on Azure Cosmos Db vs. SQL Database, and there will be more based on the discussion and feedback I received and things I learnt along the way. The point I want to make is that when implementing scale-out data storage, regardless of whether you are considering Azure SQL Database, Cosmos Db or another storage engine, you have to think differently about your read and write patterns. To paraphrase Conor Cunningham (linkedin | blog) from his excellent OLTP Sharding Techniques for Massive Scale presentation at SQL PASS

Overview of Azure Virtual Machine IO performance and throttles

In this post we are going to look at the IO performance of a Virtual Machine in Azure. We are specifically talking about the GS4 machines with premium managed disks. The theory should apply to all classes of machine, but some, such as the L series, have a different configuration for the temporary drive, which is important. The data in this post has been gathered using a mixture of this excellent post https://blogs.technet.microsoft.com/xiangwu/2017/05/14/azure-vm-storage-performance-and-throttling-demystify/ and generating IO using diskspd and measuring using perfmon. Speed vs Throughput It is worth pointing out that Azure specifies the performance of the Disks /
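
For anyone wanting to reproduce this kind of test, a representative diskspd invocation looks like the following; the workload parameters here are examples, not the exact ones used in the post:

```powershell
# 60 seconds of 64KB random IO, 30% writes, 8 threads with 8 outstanding
# IOs each, software and hardware caching disabled, latency stats captured
.\diskspd.exe -c10G -d60 -r -w30 -t8 -o8 -b64K -Sh -L E:\testfile.dat
```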

CosmosDb, know your costs, and remember…

This will be a short post to emphasize a simple point, yet one that should make an enormous difference to how you approach configuring a CosmosDb collection and modelling documents to support read and write requirements. Know your costs I cannot emphasize this point enough. The folks at Microsoft have made this really easy, be it via the Request Units (RU) and Data Storage calculator, the collection Query Explorer in the Azure Portal, or a REST client such as Postman coupled with the really useful library and samples by a Microsoftie over on git, the documentdb postman collection. Let’s
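
One practical detail worth knowing: every response from the Cosmos DB REST API reports the request’s cost in the x-ms-request-charge header. A rough sketch (the account URI is a placeholder, and building the $authHeaders hash with the required authorization signature and api-version is assumed to happen elsewhere, e.g. via the Postman collection mentioned above):

```powershell
# Query the collection's documents endpoint and read back the RU charge
$response = Invoke-WebRequest -Method Get `
    -Uri "https://myaccount.documents.azure.com/dbs/mydb/colls/mycoll/docs" `
    -Headers $authHeaders   # assumed: auth + api-version headers built elsewhere
$response.Headers["x-ms-request-charge"]
```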

The problem with TDE and the challenge of T

I recently gave a SQL Supper talk as part of the Microsoft Future Decoded evening community events, and I made the point that I am not impressed by Transparent Data Encryption (TDE), be it in SQL Server, Azure SQL Database or Cosmos Db. I would like to explain why. The problem of TDE I have worked with data and storage engines for some time, and therefore TDE seems straightforward to me. I think a good overview of TDE for SQL Server, Azure SQL Database and Azure SQL Data Warehouse is given here, and I think a similarly good overview of TDE
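
For context, switching TDE on for an Azure SQL Database is a one-liner with the AzureRM cmdlets, which is part of why it gets reached for so readily; the resource names below are placeholders:

```powershell
# Enable Transparent Data Encryption on a single database
Set-AzureRmSqlDatabaseTransparentDataEncryption -ResourceGroupName "MyRg" `
    -ServerName "myserver" `
    -DatabaseName "MyDb" `
    -State Enabled
```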

Using a Cloud Witness for Clusters

On a client site recently, a question was asked about the file share witness in an on-premises SQL Server failover cluster, and where to put it if you only have 2 sites. As always, it depends! Let’s look at some scenarios. Bear in mind that use of the Azure Cloud Witness requires Windows Server 2016 or later. Topology 1 Three node cluster with 2 nodes at the primary site and 1 at disaster recovery (DR). Most people want high availability at their “primary site” and are happy to have standalone capability at the business continuity (DR) site. To save storage costs, I
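
Configuring the quorum to use a Cloud Witness is a single cmdlet on Windows Server 2016 or later; a minimal sketch with placeholder values:

```powershell
# Point the cluster quorum at an Azure storage account acting as witness
Set-ClusterQuorum -CloudWitness `
    -AccountName "mystorageaccount" `
    -AccessKey "<storage-account-access-key>"
```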

CosmosDb, know your partition costs, well more or less

In my previous post CosmosDb, know your costs, and remember I made the point that by understanding RU costs early, you can make informed decisions in relation to document design and application CRUD and query operations. While it is easy and most certainly useful to arrive at a projected RU cost, using, for example, the Request Units (RU) and Data Storage calculator or going directly against a fixed 10GB collection via the Azure Portal (incidentally the same costs), the problem is that these do not highlight RU costs when partitioning is required to support scale-out. Now if you know your

Azure Automation Module Import: Sorry! You’re not my type.

I’ve just burnt several hours on an issue with importing PowerShell modules into an Azure Automation account using the New-AzureRmAutomationModule cmdlet. Although I could import the module with no issue using the portal, I was getting an error of “Import newer version failed” when trying to do it using the cmdlet, and when I checked the module in the portal the message was: “Error importing the module cAsosSQLAddons. Import failed with the following error: Orchestrator.Shared.AsyncModuleImport.ModuleImportException: No content was read from the supplied ContentLink. [ContentLink.Uri=https://xxxxx.blob.core.windows.net/dscresources/DSCResources.cXXXXSQLAddons/1.0.9/cXXXXSQLAddons.zip]” When importing a module using New-AzureRmAutomationModule, one of the parameters you must supply is
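
For reference, the shape of the call in question is below, with the parameter names as I recall them from the AzureRM version of the cmdlet and all values as placeholders; as the post goes on to discuss, the content link must be a URI the Automation service can download the module .zip from directly, for example a blob URL with a SAS token:

```powershell
New-AzureRmAutomationModule -ResourceGroupName "MyRg" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "MyModule" `
    -ContentLink "https://mystorage.blob.core.windows.net/modules/MyModule.zip?<sas-token>"
```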

CosmosDb and CRUD with a S

In preparation for my talk Azure SQL Database vs. CosmosDb - How do you choose? at SQLBits, I spent a lot of time thinking about create, read, update and delete (CRUD) operations, and concluded that with the advent of CosmosDb and its multi-model abstraction layer, CRUD needs an S. CRUD as an acronym has served software engineering very well by encapsulating the core actions of data persistence and retrieval. In the transition from the “ye olde” closed and monolithic systems to the current open, loosely coupled and distributed micro-service architectures via client server and

Why is Sqlpackage Using All The Build Server Memory?

Sqlpackage can be particularly resource-intensive when scripting a database that has a considerable number of objects. In this post I’m going to discuss the options available when scripting out a database deployment file from a dacpac using sqlpackage.exe. I’m also going to investigate how resource-intensive they are, what we can do to limit the hardware resources used, and how much of an impact this has on our waiting times, with some interesting results on where we were taking the performance hit. Recently I’ve noticed that when we have more than one build running at the same time
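
For context, the scripting operation under discussion looks something like this; the server, database and paths are placeholders:

```powershell
# Generate a deployment script from a dacpac against a target database
.\sqlpackage.exe /Action:Script `
    /SourceFile:C:\builds\MyDatabase.dacpac `
    /TargetServerName:localhost `
    /TargetDatabaseName:MyDatabase `
    /OutputPath:C:\builds\MyDatabase_deploy.sql
```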

SQL Server Container Performance

Is SQL Server in a container faster than in a VM? I briefly looked at SQL Server containers when Windows Server 2016 was released. Containers offer the ability for rapid provisioning and denser utilization of hardware, because the container shares the base OS’s kernel; there is no need for a hypervisor layer in between. As a recap for those that are not up to speed with containers, the traditional architecture of databases in a VM is like so: the hypervisor OS is installed onto the host hardware, a physical server in the data centre. Many VMs are created on the hypervisor
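
As a concrete point of comparison, spinning up a SQL Server instance in a Windows container is a single docker run; the image tag and password below are illustrative, era-appropriate values:

```powershell
# Run SQL Server in a Windows container, exposing the default port
docker run -d -p 1433:1433 `
    -e "ACCEPT_EULA=Y" `
    -e "sa_password=Str0ngP@ssw0rd!" `
    microsoft/mssql-server-windows-developer
```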

SSIS Package Execution In Azure Is Now Available

Well, it’s been some time coming, but SSIS packages are the latest product to make the move from on-premises to Azure. You can now take your SSIS projects and deploy them to the new Platform as a Service (PaaS) offering in Azure. The aim of the team at Microsoft was for users to take their current SSIS packages and just “lift and shift” them to Azure. So in development terms that means there are minimal to no changes to be made, at least in the solution. But before we get into the deployment and running of SSIS packages
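
Under the covers this runs on the Azure-SSIS Integration Runtime in Data Factory V2. A hedged sketch of provisioning one with the AzureRM cmdlets; every name below is a placeholder, and the parameter set reflects my understanding of the cmdlets rather than the post’s exact steps:

```powershell
# Create (or update) a managed Azure-SSIS Integration Runtime
Set-AzureRmDataFactoryV2IntegrationRuntime -ResourceGroupName "MyRg" `
    -DataFactoryName "MyDataFactory" `
    -Name "MySsisIr" `
    -Type Managed `
    -Location "NorthEurope" `
    -NodeSize "Standard_D4_v2" `
    -NodeCount 2 `
    -CatalogServerEndpoint "myserver.database.windows.net" `
    -CatalogAdminCredential (Get-Credential) `
    -CatalogPricingTier "S1"

# Start it; provisioning can take a while
Start-AzureRmDataFactoryV2IntegrationRuntime -ResourceGroupName "MyRg" `
    -DataFactoryName "MyDataFactory" -Name "MySsisIr" -Force
```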