Search Results

Azure and Guest OS Families

Recently one of our clients came to us with an issue: the Azure SDK 2.7.8 was being retired and needed updating for their Azure Cloud Services. OK, simple enough, but the issue was that the latest versions of the Azure SDK require .NET 4.6.2 to be installed. You see, when you deploy a Cloud Service to Azure, it is deployed to a VM that is spun up as part of the deployment process, and by default Windows Server 2012 R2 does not come with .NET 4.6.2 installed. So the problem was: how do you get 4.6.2 installed on that flavour
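The usual answer for Cloud Services is a startup task that installs the framework on the role instance at boot. A minimal sketch of such an install script, assuming the offline installer is bundled with the role (local paths here are illustrative):

```powershell
# Startup task sketch: install .NET 4.6.2 on a Cloud Service role instance.
# NDP462-KB3151800-x86-x64-AllOS-ENU.exe is the offline installer; paths are illustrative.
$installer = "$PSScriptRoot\NDP462-KB3151800-x86-x64-AllOS-ENU.exe"
$log = "$env:TEMP\NetFx462Install.htm"

# Skip the install if 4.6.2 (registry Release value 394802 or higher) is already present
$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' `
    -ErrorAction SilentlyContinue).Release
if ($release -lt 394802) {
    # /q /norestart keeps the startup task non-interactive
    Start-Process -FilePath $installer -ArgumentList "/q /norestart /log $log" -Wait
}
```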

Improving Azure Functions throughput

I recently ran into an Azure Functions throughput problem which I logged on Stack Overflow as regular-throughput-troughs-in-azure-functions-requests-per-second. The product group were pretty quick to respond and pointed me to their Azure App Service Team Blog post processing-100000-events-per-second-on-azure-functions/. The post lists five notable configuration choices: functions process Event Hubs messages in batches; the WebJobs dashboard is disabled in favor of using Application Insights for monitoring and telemetry; each event hub is configured with 100 partitions; data is sent to the event hubs without partition keys; and events are serialized using protocol buffers. Of these, the second and third are most interesting.
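For reference, the batching knobs live in the function app's host.json. A sketch of the Functions v1 Event Hubs settings, written out from PowerShell (the values are illustrative, not the ones from the blog post):

```powershell
# Sketch: Functions v1 host.json Event Hubs batch settings (values illustrative)
$hostJson = @'
{
  "eventHub": {
    "maxBatchSize": 64,
    "prefetchCount": 256,
    "batchCheckpointFrequency": 1
  }
}
'@
Set-Content -Path .\host.json -Value $hostJson
```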

Getting started with Azure Policy

Recently, as part of a data classification implementation, certain aspects were implemented using Azure Policy, since it supports auditing of existing resources and can prevent non-compliant resources from being created in the first place. Like most things, the documentation reads well and the samples seem useful, until one has to do something different and go off-piste. There are plenty of VM and tagging samples, though not so many for Azure SQL Database. Therefore, as Sabin is a data engineering consultancy, all examples given here will be Azure SQL Database focused, rather than the canonical VM found in many examples.
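As a taster, a minimal sketch of defining and assigning a policy that audits every Azure SQL Database in a subscription, using the AzureRM cmdlets (the names and scope here are illustrative, not from the implementation described above):

```powershell
# Sketch: audit policy scoped to Azure SQL Databases (names and scope illustrative)
$rule = @'
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Sql/servers/databases"
  },
  "then": {
    "effect": "audit"
  }
}
'@

$definition = New-AzureRmPolicyDefinition -Name 'audit-sql-databases' `
    -DisplayName 'Audit Azure SQL Databases' -Policy $rule

New-AzureRmPolicyAssignment -Name 'audit-sql-databases' `
    -PolicyDefinition $definition `
    -Scope "/subscriptions/$((Get-AzureRmContext).Subscription.Id)"
```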

Managing Azure Functions logging to Application Insights

The Azure Functions team has made it incredibly easy to emit telemetry to Application Insights. It really is as easy as updating the Function App’s settings as described by the App Insights wiki page over at Azure Functions on GitHub. However, if you are on the basic pricing plan for Application Insights then the 32.3 MB daily allowance gets used up pretty quickly. The remainder of this post is about understanding the telemetry data sent to Application Insights by Azure Functions, and how to configure the Function App’s host.json to filter and reduce the volume of telemetry sent. The naïve approach
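To preview where this ends up: the Functions v1 host.json exposes a logger category filter. A sketch that drops everything below Error except the per-execution results (the levels chosen here are illustrative):

```powershell
# Sketch: Functions v1 host.json logger filter (levels illustrative)
$hostJson = @'
{
  "logger": {
    "categoryFilter": {
      "defaultLevel": "Error",
      "categoryLevels": {
        "Host.Results": "Information"
      }
    }
  }
}
'@
Set-Content -Path .\host.json -Value $hostJson
```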

ARM Template Snippet - Allow Azure Resources To Access Azure SQL Instance With This One Simple Trick!

Hello! I’m really getting into using ARM Templates to deploy Azure Resources. For those of you not ITK, I’ll summarise in a sentence for the sake of brevity: ARM Templates are basically Infrastructure as Code. It’s all very interesting, go and have a read. In addition to having our code in source control, the ability to deploy our entire infrastructure from source into Azure is fantastic, because now we can run one single script to build and deploy both the infrastructure and the code from source, and we can all have our own personal environment in Azure. And
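The "one simple trick" of the title is the special 0.0.0.0 firewall rule, which Azure SQL interprets as "allow Azure services to connect". A minimal sketch of deploying it, with the template inlined for brevity (the server and resource group names are illustrative, and the server must already exist):

```powershell
# Sketch: ARM resource for the special 0.0.0.0 firewall rule (names illustrative)
$template = @'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers/firewallRules",
      "apiVersion": "2014-04-01",
      "name": "myserver/AllowAllWindowsAzureIps",
      "properties": {
        "startIpAddress": "0.0.0.0",
        "endIpAddress": "0.0.0.0"
      }
    }
  ]
}
'@
Set-Content -Path .\firewall.json -Value $template
New-AzureRmResourceGroupDeployment -ResourceGroupName 'my-rg' -TemplateFile .\firewall.json
```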

Migrating SSIS Packages to SSIS Azure

Hello! In case you missed the announcement (and there were a lot of announcements during MS Ignite), SQL Server Integration Services is in Public Preview on Azure! I’ve written about it elsewhere in greater depth, but here are the headlines: It makes use of SSIS Scale Out, which was released as part of SQL Server 2017. Although it is based on SSIS Scale Out, you can’t actually configure SSIS Scale Out to run on the instance. If this confuses you then read my in-depth post. SSISDB is installed either in SQL Azure or on a Managed Instance. You

Running SQL Server in an Azure Container Instance

Azure Container Instances are still in Preview and not officially available for Windows yet, which made me smile. It took me a while to figure out how to get this working, so I thought I’d share what I’ve found. Containers are great for lightweight testing of code before deployment to production servers because they can be created so quickly, and they reliably give you the same environment to test in. Now that Microsoft is offering container instances in Azure, it means you don’t have to worry about provisioning and configuring your own Docker host/cluster. The options for deploying SQL Server
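For reference, a minimal sketch of what creating such an instance looks like with the AzureRM container cmdlets of that era (image tag, sizes and password are illustrative, and the -EnvironmentVariable parameter shape may differ across module versions):

```powershell
# Sketch: SQL Server Developer edition in an Azure Container Instance (names illustrative)
New-AzureRmContainerGroup -ResourceGroupName 'my-rg' -Name 'mssql-test' `
    -Image 'microsoft/mssql-server-windows-developer' -OsType Windows `
    -Cpu 2 -MemoryInGB 7 -IpAddressType Public -Port 1433 `
    -EnvironmentVariable @{ ACCEPT_EULA = 'Y'; SA_PASSWORD = 'P@ssw0rd!ChangeMe' }
```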

SSIS Package Execution In Azure Is Now Available

Well, it’s been some time coming, but SSIS packages are the latest product to make the move from on-premises to Azure. You can now take your SSIS projects and deploy them to the new Platform as a Service (PaaS) offering in Azure. The aim of the team at Microsoft was for users to take their current SSIS packages and just “lift and shift” them to Azure. So in development terms that means there are minimal to no changes to be made to the solution, at least. But before we get into the deployment and running of SSIS packages

Overview of Azure Virtual Machine IO performance and throttles

In this post we are going to look at the IO performance of a Virtual Machine in Azure. We are specifically talking about the GS4 machines with premium managed disks. The theory should apply to all classes of machine, but some, such as the L series, have a different configuration for the temporary drive, which is important. The data in this post has been gathered using a mixture of this excellent post https://blogs.technet.microsoft.com/xiangwu/2017/05/14/azure-vm-storage-performance-and-throttling-demystify/ and by generating IO using diskspd and measuring with perfmon. Speed vs throughput: it is worth pointing out that Azure specifies the performance of the disks /
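For anyone wanting to reproduce the numbers, a representative diskspd invocation (parameters illustrative, not necessarily the ones used for the data in this post; -Sh disables software and hardware caching so you measure the disk rather than the cache):

```powershell
# Sketch: generate random IO against a premium disk and capture latency stats
# -c50G create a 50 GB test file, -d60 run for 60 seconds, -r random IO,
# -w30 30% writes, -t8 eight threads, -o8 eight outstanding IOs per thread,
# -b64K 64 KB blocks, -Sh bypass caching, -L collect latency statistics
.\diskspd.exe -c50G -d60 -r -w30 -t8 -o8 -b64K -Sh -L F:\testfile.dat
```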

Fix For Using Azure Active Directory and DacFX

Hello! As part of an SSDT project we have a contained user that authenticates against an Azure Active Directory group (read more on the CREATE USER page). However, the account we are executing deployments with is the SQL admin account on the Azure SQL instance, and so we get this error - The executed script: CREATE USER [myUser] FOR EXTERNAL PROVIDER; ' Reason: '' At line:94 char:13 + Throw $toThrow + ~~~~~~~~~~~~~~ + CategoryInfo : OperationStopped: (Deployment fail... ' Reason: '':String) [], RuntimeException + FullyQualifiedErrorId : Deployment failed: 'Could not deploy package. Error SQL72014: .Net SqlClient Data Provider: Msg 33159,
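Error 33159 comes down to the fact that only connections established with Azure AD accounts can create other Azure AD users, so a SQL-authenticated admin cannot run the generated CREATE USER statement. A hedged sketch of the common workaround of running that statement separately over an AAD-authenticated connection (server, database and user names illustrative; requires a SqlClient version that understands the Authentication keyword):

```powershell
# Sketch: create the AAD user over a connection that is itself AAD-authenticated
# (only AAD connections may create AAD users - see error 33159). Names illustrative.
$connectionString = 'Server=tcp:myserver.database.windows.net;Database=mydb;' +
    'Authentication=Active Directory Password;UID=deploy@mytenant.onmicrosoft.com;PWD=...'

$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$connection.Open()
$command = $connection.CreateCommand()
$command.CommandText = 'CREATE USER [myUser] FROM EXTERNAL PROVIDER;'
$command.ExecuteNonQuery()
$connection.Close()
```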

Create an Azure Active Directory Application and Key using PowerShell

I’ve been a SQL developer for a good few years now, and have also developed numerous web applications, web services and various console apps. However, lately I find myself getting into the world of DevOps, Azure and, necessarily, PowerShell. Whilst familiar with PowerShell to a degree, I’ve learnt a lot over the past few weeks about the Azure PowerShell module, and how we can use it to script tasks that you would otherwise have to do manually in the Azure portal if you’re thinking about automation. This post should help if you want to create an Azure Active Directory application
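The core of it is only a few cmdlets. A minimal sketch (display name, URIs and password are illustrative, and newer AzureRM versions expect a credential object rather than a plain -Password string):

```powershell
# Sketch: create an AAD application, a key, and a service principal (names illustrative)
$app = New-AzureRmADApplication -DisplayName 'my-automation-app' `
    -HomePage 'https://my-automation-app' `
    -IdentifierUris 'https://my-automation-app' `
    -Password 'P@ssw0rd!ChangeMe'

# The service principal is what you actually grant permissions to
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

# e.g. let it manage resources in one resource group
New-AzureRmRoleAssignment -RoleDefinitionName 'Contributor' `
    -ServicePrincipalName $app.ApplicationId `
    -ResourceGroupName 'my-rg'
```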

Automatic Tuning Enabled By Default In Azure SQL Database Happens Today!

Hello! From January 15th (i.e. this Monday) Automatic Tuning will be enabled by default and gradually rolled out to ALL Azure subscriptions. If you are the owner of a subscription you should have received an email two weeks ago alerting you to this fact. However, if you’re not, and this has not been communicated to you, it may come as something of a surprise. With regards to how the rollout impacts you, this blog post states that “All servers that do not have automatic tuning explicitly configured will inherit Azure defaults, making automatic tuning enabled. Similarly, all databases that do
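If you would rather opt in (or out) explicitly than inherit the Azure defaults, automatic tuning can be configured per database in T-SQL. A sketch (server, database and credentials illustrative; option names as documented for Azure SQL Database):

```powershell
# Sketch: explicitly configure automatic tuning rather than inherit Azure defaults
$tsql = @'
-- Stop inheriting and choose your own options
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING = CUSTOM;
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
'@
Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' `
    -Database 'mydb' -Username 'admin' -Password '...' -Query $tsql
```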

Azure PowerShell 4.0 may break your scripts

Ensuring backwards compatibility is something that one has to consider very carefully when doing continuous delivery. We are all too well aware of the challenges of this with database systems as, generally, the database lives much longer than the apps that interact with it, and thus one has to maintain the data. SQL has far too many “legacy features” that can’t be changed due to potential breaking changes. Thankfully the SQL team now has a more robust way of managing change, and that’s through the compatibility level for the database. This allows you to upgrade to the latest runtime but
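The same reasoning applies to the Azure PowerShell module itself: if your scripts must keep working, pin the module version you tested against rather than silently taking 4.0. A sketch of one mitigation (the version number is illustrative):

```powershell
# Sketch: pin the AzureRM version your scripts were tested against (version illustrative)
Install-Module -Name AzureRM -RequiredVersion 3.8.0 -Scope CurrentUser
Import-Module -Name AzureRM -RequiredVersion 3.8.0
```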

Do Azure SQL Database External Tables have a place in a micro-service?

I was recently in discussions on using External Tables to link Azure SQL Databases across micro-service boundaries. This has led to some challenging discussions with a client and unexpected opinions internally here at sabin.io. My simple view of a micro-service is of a data store fronted by code, which is in turn behind an API or message subscriber. Importantly, only this code accesses the store. I have arrived at this opinion through many (often heated) discussions with developers implementing services, and through working with teams breaking large services into micro-services to clarify ownership and responsibility, remove dependencies and simplify
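For context, an external table in Azure SQL Database is created against an elastic query data source pointing at the other database. A minimal sketch of what crossing that boundary looks like (all names, credentials and columns illustrative):

```powershell
# Sketch: elastic query external table across two Azure SQL Databases (names illustrative)
$tsql = @'
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL OrdersServiceCredential
    WITH IDENTITY = 'readonly_user', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE OrdersService WITH (
    TYPE = RDBMS,
    LOCATION = 'myserver.database.windows.net',
    DATABASE_NAME = 'Orders',
    CREDENTIAL = OrdersServiceCredential
);

CREATE EXTERNAL TABLE dbo.Orders (
    OrderId int NOT NULL,
    CustomerId int NOT NULL
) WITH (DATA_SOURCE = OrdersService);
'@
Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' `
    -Database 'Customers' -Username 'admin' -Password '...' -Query $tsql
```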

Azure Automation Module Import: Sorry! You’re not my type.

I’ve just burnt several hours on an issue with importing PowerShell modules into an Azure Automation account using the New-AzureRmAutomationModule cmdlet. Although I could import the module with no issue using the portal, I was getting an error of “Import newer version failed” when trying to do it using the cmdlet, and when I checked the module in the portal the message was: “Error importing the module cAsosSQLAddons. Import failed with the following error: Orchestrator.Shared.AsyncModuleImport.ModuleImportException: No content was read from the supplied ContentLink. [ContentLink.Uri=https://xxxxx.blob.core.windows.net/dscresources/DSCResources.cXXXXSQLAddons/1.0.9/cXXXXSQLAddons.zip]” When importing a module using New-AzureRmAutomationModule, one of the parameters you must supply is
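For reference, the shape of the call in question; what the -ContentLink URI has to look like for the import to actually read content is the gotcha this post unpacks (account, group and module names illustrative):

```powershell
# Sketch: import a module into an Automation account from blob storage (names illustrative)
New-AzureRmAutomationModule -ResourceGroupName 'my-rg' `
    -AutomationAccountName 'my-automation' `
    -Name 'cMyModule' `
    -ContentLink 'https://mystorage.blob.core.windows.net/modules/cMyModule.zip'
```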

Migrating SSIS Packages to SSIS Azure part Two – Automating the Deployment

Hello! If you’ve read and followed through my previous post, you will have the World Wide Importers Integration Services project running in SSIS Azure. It’s all very interesting, go and have a read. One thing that is missing from that guide, the documentation, and SSIS in general, is how to automate SSIS deployments. In the WWI SSIS project, there are connection managers whose values we had to update manually to get it working post-deployment. This is exactly the opposite of what we want to do. Back when SQL Server 2012 was known as Denali, one of the
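Connection manager values can be set through the SSIS catalog's T-SQL API instead of by hand, which is the building block for automating this. A sketch (folder, project and connection manager names are illustrative, not the WWI project's actual names):

```powershell
# Sketch: set a connection manager property via the SSIS catalog (names illustrative)
$tsql = @'
EXEC [SSISDB].[catalog].[set_object_parameter_value]
    @object_type     = 20,  -- 20 = project-level parameter
    @folder_name     = N'WWI',
    @project_name    = N'DailyETLMain',
    @parameter_name  = N'CM.WWI_Source_DB.ConnectionString',
    @parameter_value = N'Data Source=myserver;Initial Catalog=WideWorldImporters;...';
'@
Invoke-Sqlcmd -ServerInstance 'myserver' -Database 'SSISDB' -Query $tsql
```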

Uploading Files To Data Lake Storage With PowerShell Part Three

Picking up from where we left off last month, today we’re looking at setting the Azure Data Lake Storage account. This post is part of a series on automating the process of uploading files to Azure Data Lake Store. Although the entire script is available on Git (posted below), I’m going to go into one function per post so that I can go into greater depth. Part One of this blog series focused on logging in to an Azure Subscription. Part Two focused on setting the Resource Group. As mentioned, today’s function starts on line 74 and is
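The shape of the function mirrors the earlier ones: test for the account, create it only when absent. A minimal sketch of that shape (the function and parameter names here are hypothetical, not necessarily the script's own):

```powershell
# Sketch: idempotent Data Lake Store account creation (names hypothetical)
function Set-AzureDataLakeStoreAccount {
    param(
        [string]$AccountName,
        [string]$ResourceGroupName,
        [string]$Location = 'northeurope'
    )
    # Only create the account if it does not already exist
    if (-not (Test-AzureRmDataLakeStoreAccount -Name $AccountName)) {
        New-AzureRmDataLakeStoreAccount -Name $AccountName `
            -ResourceGroupName $ResourceGroupName -Location $Location
    }
}
```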

The problem with TDE and the challenge of T

I recently gave a SQL Supper talk as part of the Microsoft Future Decoded evening community events, and I made the point that I am not impressed by Transparent Data Encryption (TDE), be it in SQL Server, Azure SQL Database or Cosmos Db. I would like to explain why. The problem with TDE: I have worked with data and storage engines for some time, and therefore TDE seems straightforward to me. I think a good overview of TDE for SQL Server, Azure SQL Database and Azure SQL Data Warehouse is given here, and I think a similarly good overview of TDE

SQL Supper Scripts

Hello! Thanks to everyone who turned up yesterday at SQL Supper: there was a good turnout of both new and familiar faces. The Demo Gods were with me, and I was able both to log on to my Azure VM and to deploy to SQL Azure. I’ve uploaded the scripts to gist and shared them below. I also spoke about raising a Connect issue so that the Microsoft.Build.Utilities.Core NuGet package will work with Microsoft.Data.Tools.Msbuild. I’d like to see this so that we do not have to install the Microsoft Build Tools 2015 MSI on the box. And this is important because

Uploading Files To Data Lake Store With PowerShell Part One

Hello! I’ve recently been working on uploading files to Azure Data Lake Store. It’s quite straightforward and, I think, a decent introduction to automating a deployment with Azure, as well as a good example of writing scripts that are idempotent, so I’m going to go through them from beginning to end. I’m going to go into one function per day, so this will take 5 days to cover. But I’m hoping that by focusing a bit more in-depth, as opposed to trying to cram it all into one post, it will be more informative, and both yourselves and

Uploading Files To Data Lake Storage With PowerShell Part Two

Carrying on from our previous post on automating the process of uploading files to Azure Data Lake Store, today’s function checks whether a Resource Group exists, and creates it if it does not. Although the entire script is available on Git (posted below), I’m going to go into one function per post so that I can go into greater depth. Part One of this blog series focused on logging in to an Azure Subscription. Today’s function starts on line 42 and is called Set-AzureResourceGroup. Before we go into it though, I want to take a moment
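In outline, the function looks the group up and creates it only when the lookup fails. A minimal sketch of that shape (the real script's parameter names may differ):

```powershell
# Sketch: idempotent resource group creation (parameter names illustrative)
function Set-AzureResourceGroup {
    param(
        [string]$ResourceGroupName,
        [string]$Location
    )
    # SilentlyContinue suppresses the error when the group is missing
    $group = Get-AzureRmResourceGroup -Name $ResourceGroupName -ErrorAction SilentlyContinue
    if (-not $group) {
        New-AzureRmResourceGroup -Name $ResourceGroupName -Location $Location
    }
}
```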

In a partitioned world, don’t violate core directive

This is another short post stemming from a recent talk I gave on Azure Cosmos Db vs. SQL Database, and there will be more based on the discussion and feedback I received and the things I learnt along the way. The point I want to make is that when implementing scale-out data storage, regardless of whether you are considering Azure SQL Database, Cosmos Db or another storage engine, you have to think differently about your read and write patterns. To paraphrase Conor Cunningham linkedin | blog from his excellent OLTP Sharding Techniques for Massive Scale presentation at SQL PASS

CosmosDb and CRUD with a S

In preparation for giving my talk, Azure SQL Database vs. CosmosDb - How do you choose, at SQLBits, I spent a lot of time thinking about create, read, update and delete (CRUD) operations, and concluded that with the advent of CosmosDb and its multi-model abstraction layer, CRUD needs an S. CRUD as an acronym has served software engineering very well by encapsulating the core actions of data persistence and retrieval. In the transition from the “ye olde” closed and monolithic systems to the current open, loosely coupled and distributed micro-service architectures via client server and

PowerShell Workflow Script To Stop VM’s In A Resource Group

Recently I needed to make sure that all the VMs in a given resource group were stopped, so I looked around the Runbooks available to download from the Azure Marketplace. Some of these were ridiculously complex: one was over 500 lines long! Just to stop a VM! Naturally there is some setup needed: we need to get the names of the VMs in the Resource Group and, if they are running, stop them. However, the command to stop a VM is straightforward: “Stop-AzureRmVM”, followed by the name of the VM and the resource group. Quite frankly I’m
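The essence really is that small. A sketch of the workflow, using foreach -parallel so the VMs stop concurrently (authentication to the subscription is omitted; in an Automation Runbook that would come first):

```powershell
# Sketch: stop every VM in a resource group, in parallel (auth omitted)
workflow Stop-ResourceGroupVMs {
    param([string]$ResourceGroupName)

    $vms = Get-AzureRmVM -ResourceGroupName $ResourceGroupName
    foreach -parallel ($vm in $vms) {
        # -Force skips the confirmation prompt
        Stop-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $vm.Name -Force
    }
}
```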

CosmosDb, know your partition costs, well more or less

In my previous post, CosmosDb, know your costs, and remember…, I made the point that by understanding RU costs early you can make informed decisions in relation to document design and application CRUD and query operations. While it is easy, and most certainly useful, to arrive at a projected RU cost, using for example the Request Units (RU) and Data Storage calculator or going directly against a fixed 10GB collection via the Azure Portal (incidentally, the costs are the same), the problem is that these do not highlight RU costs when partitioning is required to support scale-out. Now if you know your

Using a Cloud Witness for Clusters

On a client site recently, a question was asked about the file share witness in an on-premises SQL Server failover cluster, and where to put it if you only have 2 sites. As always, it depends! Let’s look at some scenarios. Bear in mind that use of the Azure Cloud Witness requires Windows Server 2016 or later. Topology 1: a three-node cluster with 2 nodes at the primary site and 1 at disaster recovery (DR). Most people want high availability at their “primary site” and are happy to have standalone capability at the business continuity (DR) site. To save storage costs, I
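Configuring the witness itself is a one-liner once you have an Azure storage account. A sketch (account name and key illustrative):

```powershell
# Sketch: point the cluster quorum at an Azure Cloud Witness (requires WS2016+)
Set-ClusterQuorum -CloudWitness `
    -AccountName 'mywitnessstorage' `
    -AccessKey '<storage account access key>'
```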

SSDT 16.5 Released part 2: Using the DacFx API and Samples!

Hello! Yesterday I posted about the new release of SSDT from the SQL Tools Team at Microsoft. One of the big changes is the ability to create the deployment report, create the deployment script and execute the deployment all in one command. The other change is that for Azure two scripts are now generated: one for any changes that need a connection to master, and the other for changes to the user database. The samples yesterday showed how to execute the new method using SqlPackage, but a lot of people, myself included, have automated the deployment using the DacFx API through
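A minimal sketch of driving that combined call from PowerShell, as I understand the 16.5 DacFx surface (paths, names and connection string are all illustrative):

```powershell
# Sketch: script + report + deploy in one Publish call (paths illustrative)
Add-Type -Path 'C:\dacfx\Microsoft.SqlServer.Dac.dll'

$dacServices = New-Object Microsoft.SqlServer.Dac.DacServices(
    'Server=tcp:myserver.database.windows.net;...')
$package = [Microsoft.SqlServer.Dac.DacPackage]::Load('C:\build\MyDb.dacpac')

$options = New-Object Microsoft.SqlServer.Dac.PublishOptions
$options.GenerateDeploymentScript = $true
$options.GenerateDeploymentReport = $true

$result = $dacServices.Publish($package, 'MyDb', $options)
$result.DatabaseScript   | Out-File 'C:\build\MyDb.sql'        # user database changes
$result.MasterDbScript   | Out-File 'C:\build\MyDb.master.sql' # changes needing master
$result.DeploymentReport | Out-File 'C:\build\MyDb.report.xml'
```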

CosmosDb, know your costs, and remember…

This will be a short post to emphasize a simple point, yet one that should make an enormous difference to how you approach configuring a CosmosDb collection and modelling documents to support read and write requirements. Know your costs: I cannot emphasize this point enough. The folks at Microsoft have made this really easy, be it via the Request Units (RU) and Data Storage calculator, the collection Query Explorer in the Azure Portal, or a REST client such as Postman coupled with the really useful library and samples by a Microsoftie over on GitHub, documentdb postman collection. Let’s

Why is SSDT Always Rebuilding My Constraints?

Hello! Let me begin by saying that I’m a big fan of SSDT. It’s free, it works with all flavours of Visual Studio, and the team do their very best to keep up to date with features that are released with increasing regularity by Microsoft in both Azure and SQL Server on-prem. I’ve met the team a few times, and they’re genuinely keen to engage with the users of SSDT on how it can be better and how it can be extensible. So, SSDT is a great tool. I said great, but not perfect. It has its limitations, the same as any tool.

VSTS Hosted Build Agent Specs

I was interested to know just what the hardware specifications of the hosted build agent are, so I added some PowerShell to read out the info below:

2016-06-29T09:23:31.3935358Z systemname      Name                                      DeviceID NumberOfCores NumberOfLogicalProcessors Addresswidth
2016-06-29T09:23:31.3935358Z ----------      ----                                      -------- ------------- ------------------------- ------------
2016-06-29T09:23:31.3935358Z TASKAGENT5-0010 Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz CPU0     2             2                         64
2016-06-29T09:23:31.4095356Z Total memory:  7167.55078125

What piqued my interest more was that this is the exact same spec as a D2 v2 box available via Azure. Clearly, Microsoft have a build agent template which is built, stored in a pool, and provisioned whenever a build

SSDT 16.5 Released

Hello! Recently the SQL Tools Team released a new version of both SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS). There’s a range of bug fixes, but two new features that I am particularly interested in. Firstly, SqlPackage.exe and the DacFx API can now generate a deployment report, generate a deployment script and publish to a database all in one action. Neat! This is useful because it’s important to keep track of exactly what has changed on a database. Of course, there’s nothing stopping you right now from executing these options as three separate actions, but there
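In SqlPackage terms this is the Publish action plus the new DeployScriptPath and DeployReportPath parameters. A sketch (paths and connection string illustrative):

```powershell
# Sketch: publish, script and report in a single SqlPackage action (paths illustrative)
& sqlpackage.exe /Action:Publish `
    "/SourceFile:C:\build\MyDb.dacpac" `
    "/TargetConnectionString:Server=tcp:myserver.database.windows.net;Database=MyDb;..." `
    "/DeployScriptPath:C:\build\MyDb.sql" `
    "/DeployReportPath:C:\build\MyDb.report.xml"
```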

VSTS Hosted Build Specs: The Script

Some months back, I published a post about the VSTS Hosted Build Agent’s specs. One thing I didn’t add was the PowerShell script that I used to get those details, mainly because I couldn’t find the script anymore… So by popular demand, here is the script I used to get the build specs. I ran it as an in-line PowerShell script as part of a build that was being run on the Hosted Build Agent. Here is the output from the script: The CPU has changed since I last gathered the data about this: previously it
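The columns shown in the earlier post map directly onto Win32_Processor and Win32_ComputerSystem, so a sketch of a script that produces output in that shape (not necessarily the exact script) looks like this:

```powershell
# Sketch: dump the build agent's CPU and memory specs via WMI
Get-WmiObject -Class Win32_Processor |
    Format-Table SystemName, Name, DeviceID, NumberOfCores,
                 NumberOfLogicalProcessors, AddressWidth -AutoSize

# TotalPhysicalMemory is in bytes; divide to report megabytes
$memoryMb = (Get-WmiObject -Class Win32_ComputerSystem).TotalPhysicalMemory / 1MB
Write-Output "Total memory:  $memoryMb"
```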

Where To Find Us at SQLBits

Hello! SQLBits is back! This year SQLBits is being hosted in the Grand Hallway at Olympia, which opened way back in 1886. This marks the return of Bits to London, and in fact to the south of England, for the first time since 2015; back then it was hosted at the ExCeL exhibition centre in the east of London. If you have never heard of SQLBits before I’d be very surprised, as it is the largest SQL Server conference in Europe and offers world-class training. But enough of the history lesson, let’s talk about what’s coming up

Assist Deploy Is Available on GitHub

Hello! For some time now I have been working on automating SSIS deployments, and earlier this week I published my efforts on GitHub. But before I get into the what/how, let’s focus on the why, and let me catch you up on how I got here… The task of taking an ispac and deploying it is, in and of itself, quite a straightforward process, as there are multiple ways to do this. For those of you who want the abridged version of the linked post, the choices are as follows: the Integration Services Deploy Wizard; the SSIS Catalog T-SQL API; PowerShell
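To give a flavour of the T-SQL API option from that list, deploying an ispac comes down to streaming its bytes into catalog.deploy_project. A sketch (folder, project and path illustrative; note OPENROWSET BULK reads the path on the SQL Server itself):

```powershell
# Sketch: deploy an ispac through the SSIS catalog T-SQL API (names illustrative)
$tsql = @'
DECLARE @ProjectBinary varbinary(max) =
    (SELECT BulkColumn FROM OPENROWSET(BULK N'C:\build\MyProject.ispac', SINGLE_BLOB) AS ispac);

EXEC [SSISDB].[catalog].[deploy_project]
    @folder_name    = N'MyFolder',
    @project_name   = N'MyProject',
    @project_stream = @ProjectBinary;
'@
Invoke-Sqlcmd -ServerInstance 'myserver' -Database 'SSISDB' -Query $tsql
```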

SSDTPokedex: Migrating a Database Into SSDT

Hello! If you want to have the best chance of something being successful, you have to be committed to it from the start. That’s a pretty fatuous-sounding statement, almost as bad as “to make something better you have to do more of the good stuff and less of the bad stuff”, so let me contextualise: if you want good testing coverage on an application, then you need to be serious about testing from the first day you write code for the application. Be it manual or automated testing, you need to put the effort in terms