Category Archives: Stories

Terraform Cloud Agents

Updated November 16, 2020: Terraform Cloud Agents now supports user-configured multiple agent pools!

Well hello there, readers, if any still remain. I’ve been gone a long time, but I’ve got some cool new stuff to show today – let’s talk about Terraform Cloud Agents.

First, some housekeeping. Did I mention that I’ve moved from VMware to HashiCorp? I didn’t, you say? Well, that seems important to note for context in this post. I joined the HashiCorp Terraform Cloud team in June of 2019 (time flies) and have been here ever since – loving every minute of it!

Now, on to the fun stuff. In mid-August, Terraform Cloud made its biggest announcement since we launched publicly in January – the Terraform Cloud Business tier. This new tier of service provides a whole host of additional business- and enterprise-focused features in our already awesome SaaS platform, and you can read all about it here if you like.

Of these new features, the one that I’ve spent the last 3 months working on is our new Terraform Cloud Agents capability. The goal of this project was to provide a way for our customers with significant investments in on-premises infrastructure (think vSphere, Nutanix, F5, OpenStack, etc) to take advantage of our SaaS management interface without having to expose all of those infrastructure components to the Internet.

This would be super useful for a whole host of scenarios – for example, a datacenter with a ton of bare metal or vSphere infrastructure that needs orchestration. Or, perhaps remote and branch offices (ROBO) where periodic infrastructure management is needed – often enough to want to use an IaC approach through Terraform, but not often enough to justify a full installation of Terraform Enterprise. Or even just an isolated AWS VPC that doesn’t expose public access for data protection and security purposes. There are a ton of reasons why a network segment might be isolated, and Terraform Cloud Agents are meant to help ensure those segments may be lonely, but not forgotten.

So, how do you use a Terraform Cloud Agent? Well, let’s step through it. It’s crazy simple to do.

First, you’ll need to identify where you want to run the agent. We provide it either as a Docker container in the public registry or as a standalone signed binary from HashiCorp directly. You can use these to run the agent in a number of ways – on a bare VM, in base Docker, in Kubernetes, in HashiCorp Nomad… you get the idea! If you’d like to use K8s to run the agent, I suggest checking out this great Terraform module published by my coworker Phil that makes it super easy.

For this example, we’ll just run a Docker container in a local Docker machine for simplicity’s sake. We won’t go into all the process supervisor options here, though we do recommend pairing the agent with a supervisor for reliability purposes.

Before we get started running the agent, we need to do some setup in our Terraform Cloud for Business environment. We’ll assume you’ve got an account on Terraform Cloud already (if not, you can get one for free, though it won’t have access to these agent capabilities unless you have the paid Business tier active). Navigate to the Settings > Agents page, and click New agent pool.

An agent pool is a logical grouping of agents that work together to handle requests from your workspace(s). You can group your agents however you like, but one example could be to create a single agent pool for each on-premises data center you want to connect to. For example, DC1_NewYork and DC2_HongKong. We’ll get into how you divide work up in just a bit.

create agent pool

This will prompt you with the agent pool creation dialog, where you can enter an agent pool name. Do so, then click Continue.

set agent pool description

Once you’ve done this, you’ll be prompted to create a new agent pool token. These tokens are used for the agents within a pool to authenticate to Terraform Cloud, and can be used by any agent within a pool but are not shared across pools. Enter a token description and click Create token.

create a token description

Terraform Cloud will automatically generate a series of environment variables and commands you can use in your Docker environment as well. We’ll use these in a minute; keep this dialog visible on your browser and flip to a terminal environment. Remember that you never get to see this token again, so don’t close this or you’ll have to start over.

Now that you’re in your terminal, make sure you have Docker installed. I’m personally on a Mac using Homebrew, so in this case I need to ensure I have docker and docker-machine installed. We’ll start by running docker-machine start default to bring up a basic machine to execute containers.

We’ll follow that quickly with eval "$(docker-machine env default)" to ensure that our interactive shell can interact with that running machine in the background.

docker-machine start commands
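For easy copy-pasting, those two setup steps together look like this (assuming docker-machine is installed and a machine named default already exists):

```shell
# Start the default docker-machine VM to host containers.
docker-machine start default

# Point the current shell's Docker client at that machine.
eval "$(docker-machine env default)"
```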

Now, I’m going to use an env.list file to contain all my environment variables for this example. There are lots of different ways to handle environment variables for Docker, and you can pick whatever fits your architecture. You can export variables directly into the environment, use a file as I will here, or put them right on the command line if you like (though I don’t recommend that last option, as your token will end up visible in process lists).

Here’s what env.list looks like on my machine. You can see I’ve exported the one required variable, TFC_AGENT_TOKEN, and set it to the token copied from Terraform Cloud above. I’ve also set the optional TFC_AGENT_NAME variable so there will be a friendly name displayed in my agents list later on. The TFC_ADDRESS variable isn’t required, and this URL is actually the default anyway. And finally, the TFC_AGENT_LOG_LEVEL is set to DEBUG. This lets you see a bunch more information about the agent’s work, and is helpful but verbose. Once you’ve set at least your token and name, save and exit your editor.

env.list file contents
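Here’s a sketch of creating that file from the shell – the token value is a placeholder (paste your real one from the Create token dialog), and the agent name is just one I made up:

```shell
# Write the agent's environment file. TFC_AGENT_TOKEN is a placeholder;
# replace it with the token from the Create token dialog.
cat > env.list <<'EOF'
TFC_AGENT_TOKEN=replace-with-your-agent-pool-token
TFC_AGENT_NAME=dc1-agent-01
TFC_ADDRESS=https://app.terraform.io
TFC_AGENT_LOG_LEVEL=DEBUG
EOF

# Sanity check: four KEY=value lines.
grep -c '=' env.list   # prints 4
```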

Here comes the fun part! Now we’ll run docker run --env-file env.list hashicorp/tfc-agent:latest and Docker will go do its magic. All the supporting image layers will be pulled into your default running Docker machine, the agent will start up, self-check for upgrades, and register itself with Terraform Cloud.

Note: if you’re using Docker Desktop 2.4 or greater on a Mac, you may need to run the container with the -it flag, or it may not exit properly. This appears to be an issue in Docker, but we’re still trying to find a workaround for it.
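Putting it together, the run command and the -it variant from the note above look like this:

```shell
# Start the agent with environment variables from env.list.
docker run --env-file env.list hashicorp/tfc-agent:latest

# On Docker Desktop 2.4+ on a Mac, allocate an interactive TTY so the
# container can exit properly.
docker run -it --env-file env.list hashicorp/tfc-agent:latest
```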

agent running at command line

Now flip back to your browser. You can close the Create an agent pool dialog if you haven’t already by clicking Finish. You’ll now see the list of agent pools and their agents registered to your environment.

running agents within a pool in terraform cloud

Wow, is it really that easy? It sure is. Your new agent is now up and running, registered with your organization and waiting for work. You can see I have several agents listed here, some of which are in the Exited state. These are containers that I started up, used, and quit gracefully. They’ll automatically purge from the UI after a few hours of inactivity.

You’ll also see the note 1 out of 10 Purchased Agents in the table header. The number of purchased agents is determined by your Terraform Cloud Business subscription, and the number of agents that count against it depends on those agents’ status. Exited or otherwise offline agents don’t count against your total – only agents in the Busy, Idle or Unknown state count against your entitlement here. Unknown agents may or may not come back online; if they do, they’ll return to an Idle state. If they don’t, they’ll expire out as permanently offline.

Optional: You can also create additional agent pools and register agents against them. You can see here that I’ve created a second pool for DC2_HongKong and there are agents waiting for work there as well.

multiple pools displayed within terraform cloud

A quick note on agent architecture: it’s designed to require no inbound Internet access to your environment. It continuously polls the Terraform Cloud service using outbound TCP/443 calls to ask for work items, then retrieves them if available. Super cool.

So now that the agent pool is created and the agents are registered and ready, all that’s left is to tell your Terraform Cloud workspaces to use them. You’ll want to click on Workspaces at the top of the screen, then select the name of the workspace you want to configure. From there, choose Settings > General and change your execution mode to Agent. Once this option is selected, you can also pick which Agent pool should be used for this workspace’s runs.

If your organization isn’t entitled to agents as a feature or if you haven’t configured any agents yet, you won’t be able to select this setting.

configuring a terraform cloud workspace to use the agent execution mode

Click Save Settings at the bottom of the screen (not pictured) and you’re all set to go!

You can now queue a plan in this workspace via whatever mechanism you choose; the Queue Plan button in the UI, a VCS file-based commit, etc. For this example, we’ll use the Queue Plan button. Click it, fill in a reason if you like, then click Queue plan again.

queueing a new plan

This will take you to the Run details view for this execution. You can see the plan has completed, Cost Estimation has run, any applicable Sentinel policies were evaluated, and the run is now just waiting for approval to actually execute.

You can also see exactly which agent pool the run executed in, along with the specific agent that ran it.

But perhaps even more interesting is what’s going on over in that terminal window. Here you can see that the agent has received the run, determined which version of Terraform is required to carry it out and downloaded the binary release to do so.

Side note: in this case the Terraform version is 0.12.7, which I just realized as I was taking these images and definitely need to update – did you know 0.14 was just released last month as well?

Once the Terraform binary is available, the agent will run a terraform init to download all required providers, etc., followed by the requested terraform plan itself – the logs and output of which are streamed back to Terraform Cloud for you to view in the friendly UI.

tfc-agent job details output

Now if you wanted to, you could go back over to Terraform Cloud, approve the run to actually apply, and the agent will finish the job for you! We won’t explore that here, since I’m sure y’all know what a Terraform apply looks like.

And that’s pretty much it! You’ve created an agent pool, an agent pool token, deployed an agent, configured your workspace to use the pool, and run a plan/apply cycle against it.

A few important notes:

On pool and token management: you can manage your agent pools and tokens at any time by clicking the name of your agent pool in the agents list view. Revoking a token will cause any agents using it to exit, since they’ll no longer be able to communicate with Terraform Cloud. You can create a new token and start new agents with it at any time. You can rename agent pools at any time, but you cannot delete an agent pool that still has workspaces configured to use it.

On agent flexibility: a really cool use case for the binary agent here is to roll your own execution environment, complete with whatever sideloaded additional tools you might need. Want to leverage the AWS or Google Cloud CLI in your Terraform configs? Include them on a VM alongside the Terraform Cloud Agent binary and you’re good to go!
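As a sketch of that roll-your-own idea, here’s a hypothetical VM setup – the paths, the sideloaded-tools directory, and the agent name are all made up for illustration:

```shell
# Hypothetical example: run the standalone tfc-agent binary alongside
# sideloaded CLIs so Terraform runs can shell out to them.
export PATH="/opt/extra-tools/bin:$PATH"   # e.g. aws and gcloud CLIs live here
export TFC_AGENT_TOKEN="replace-with-your-agent-pool-token"
export TFC_AGENT_NAME="dc1-vm-agent-01"

# Start the agent (installed here at a made-up path).
/opt/tfc-agent/tfc-agent
```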

To learn more about Terraform Cloud Agents, you can visit the documentation page or contact HashiCorp to request more information.

And, of course, it wouldn’t be right to end without some kind of adult beverage – so today’s post is brought to you by Urban Roots Brewing‘s Floofster – an adorable German-style Hefeweizen with a name I can’t stop saying. Prost!

urban roots brewing's floofster

Cloud Assembly API Collection

Greetings, readers! First of all, let me apologize for being absent for so long. I’ve been so incredibly busy with my new role as a product manager that I haven’t been able to blog nearly as much as I would have liked.

Also, a lot of the cool stuff I’ve been working on has been highly secretive until recently – but now that our new flagship VMware Cloud Automation Services have been released, I’m free to start talking more about it publicly!

cloud assembly, service broker, and code stream logos

If you’re not already aware of it, VMware Cloud Automation Services (sometimes called CAS for short) is a SaaS-based Cloud Management platform, built from the ground up to enable multi-cloud consumption and pave over all those potholes you might hit as you drive the “Hybrid Highway.” If you’re interested, we are offering free 30-day trials – and I hope you check it out!

One of the cool things I’ve been working on as a side project has been a collection of APIs that show not only the power and potential of this platform, but how rapidly you can go from swiping your credit card to fully automated, multi-cloud provisioning and management.

It’s really gratifying to be able to show how you can go from a blank, brand new Cloud Automation Services organization to one that is fully configured, connected to multiple clouds, contains a cloud-agnostic blueprint and has even provisioned and managed some resources – all in under 90 seconds.

To that end, I’ve published the API collection on GitHub for the interested to review, use, update and contribute to. Your comments and contributions are welcome! I’ll keep the description of the collection short and sweet here, since @CodyDeArkland and I will be maintaining and managing the collection directly on GitHub – but right now, it’s everything you need to go from zero to multi-cloud hero!

Check it out, and please leave any feedback either here or on GitHub.

This message has been brought to you by a delicious Single Malt Miyagikyo I recently had the opportunity to sample at the home of another outstanding community member, none other than the one and only Virtual Hobbit – smooth and easy drinking, and a most gentlemanly thing to have on hand for me as he doesn’t like whisky himself.

miyagikyo whisky


Reflections on a Career with VMware

2018 is a special year for me. It marks the 4-year anniversary of my career with VMware – the best company I’ve ever worked for, and arguably one of the best in the world. I was so pleased to come into my office a few weeks ago and find my beautiful 4-year VASA cubes waiting for me – as both someone who bleeds VMware green and blue and as a total contemporary art nerd, I’d really been looking forward to these.


And as I enter my 5th year with this incredible, visionary organization and look back on my time here, I’m pleased to announce that I’ve been honored with yet another new opportunity.

As of Monday, March 26th 2018, I will take on the new title of Product Line Manager for Cloud Automation – focusing on our Blueprinting and Infrastructure-as-Code strategies, among other things. This is an amazing opportunity to influence the strategy – and ultimately the deliverables themselves – for a solution that I’ve loved selling, then helping to actually engineer and design during my tenure.

But how did I get here? Not to bore you with my life’s story, but I think it’s a real testament to the incredible culture at VMware to tell the story, and I hope it will be both interesting and motivating.

My journey with VMware actually started in 1998, when I used their first-ever product to run a Slackware Linux VM in order to keep me safe(r) on IRC. I was tired of swapping hard drives every time I wanted to go online, and the IRC of 1998 was absolutely no place for a Windows user. My future was pretty much cemented at that point – there was no turning back. VMs were the future, and I wanted a piece of it. Over the following 15 years, I earned two degrees and several titles throughout the IT industry designing, implementing and managing VMware solutions all over the country.

In late 2013, I got an unexpected ping from a recruiter. They’d seen my profile online, and wanted to talk. This would end up being the most important email I’d ever received, and a few weeks later I was going through the most intense multi-phasic interview process I had ever experienced. After an admittedly weird late-night meeting in a hotel lobby with the man who would be my first manager at VMware, the job was mine – a Senior Cloud Management Specialist for their Public Sector customers. My focus: something which was at the time called vCloud Automation Center, a product I’d never even heard of before.

Over the coming weeks, months and years, I taught myself how to deploy, use, troubleshoot, and most importantly, sell this amazing piece of software. Because of my background automating manual processes and transforming businesses, I enjoyed great success in this role. The stories I could tell mirrored the problems my customers were having, and it was one of the greatest joys in my professional life to be able to say that I had such a complete solution for them, and to help them realize the potential therein.

I got involved with the Hands-On Labs (have you taken a lab today? If not, check them out for free!) and through this program, made the invaluable connections to product management, engineering and marketing that I would eventually use as a springboard to become part of the product team itself.

In 2016, I joined the Cloud Management Business Unit’s engineering team as a Product Owner, which was a short-lived title that eventually transitioned to Staff Functional Architect – a role responsible for looking at customer problems and ensuring that the product was accounting for real-world customer problems, in a clear and effective way. As the team evolved and grew, I was eventually made part of the Customer Success Engineering team, which focused on ensuring that VMware’s top, most strategic customers realized the true potential of their purchases and used them to the fullest. This was an incredibly gratifying role, combining many of the things I loved most: working closely with our customers and learning new and interesting ways to apply our solutions.

But there was one thing lacking: I missed being part of the process. I really felt that I had something to contribute to our future direction; I wanted to get more involved with setting goals for the team, analyzing the market we’re part of, and defining our direction based on the trends and conditions in that market. That’s why this latest role is so exciting for me – I get to do all of that, and so much more. I’ll join a much smaller group of individuals who are all focused on ensuring that VMware continues to be a leader in Cloud Automation for years to come – and I couldn’t be more excited about the potential there.

I owe so much to so many people for helping me on this journey.

  • My original Specialist team, for taking a chance on someone who’d never sold anything in his life.
  • My customers, for buying stuff from someone who’d never sold anything in his life.
  • My very dear friends Kim Delgado, Jad El-Zein and Grant Orchard, who helped mentor me, teach me, and introduce me to the network of people who would make this possible.
  • The entire Hands-On Labs staff for giving me the opportunity to become comfortable and confident with a totally new piece of software.
  • The leadership of the Cloud Management Business Unit, where I’ve had the pleasure of working under 3 out of 5 of our VPs, for growing me into a whole new type of role.
  • And finally, the Cloud Automation PM team, for taking a chance on me. I hope I’ll prove worthy of the trust you’re placing in me.

And now, as a tribute to Grant (who brought me the bottle all the way from Oz), I raise a glass to you all. Here’s to the next 30 years at VMware. May they be as full of adventure as the last 4.


Using the new Microsoft Azure Endpoint in vRealize Automation 7.2

After months of planning and development, vRealize Automation 7.2 finally went GA today, and it feels so good! One of the most anticipated spotlight features of this new release was the Endpoint for Microsoft Azure. I had the privilege of working very closely with the team who delivered this capability, and thought I would take some time to develop a brief POC-type guide to help get you started using the new Microsoft Azure Endpoint in vRealize Automation 7.2.

This guide will walk you through configuring a brand-new Azure subscription to support a connection from vRealize Automation, then help you set up your vRA portal and finally design and deploy a simple Blueprint. We will assume that you have already set up your Azure subscription (if not, you can get a free trial) and that you have a vRealize Automation 7.2 install all ready to go. Certain steps outlined in this guide assume that your vRA configuration is rather basic and is not in production. Please use them at your own risk and consider any changes you make before you make them!

Part 1: Configuring Azure

Once you have your subscription created, log in to the Azure portal and click on the Key (Subscriptions) icon in the left-hand toolbar. These icons can be re-ordered, so keep in mind that yours may be in a different spot than mine. Note down the Subscription ID (boxed in red above) – you will need this later!

Next, click on the Help icon near the upper right corner and select Show Diagnostics. This will bring up some raw data about your subscription – and here is the easiest place I’ve found to locate your Tenant ID. Simply search for “tenant” and select the field shown above. Note this ID for later as well.

Now you’ll need to create a few objects in the Azure portal to consume from vRA. One of the great capabilities the new endpoint brings is the ability to create new, on demand objects per request – but to make things a little cleaner we will create just a few ahead of time. We’ll start with a Storage Account and a Resource Group.

Locate the Storage Accounts icon in the sidebar – again, keeping in mind that these icons can be reordered and you may have to poke around a bit to find it. Make sure the correct Subscription is selected and click Add.

You’ll be prompted with a sliding panel (Azure does love sliding panels) where you can fill in some important details about your Storage Account. This is basically a location where your files, VHDs, tables, databases, etc will be stored. Enter a Name for the Storage Account – you’ll need to make sure to follow the rules here. Only lowercase letters, must be globally unique, etc. You can choose to change any of the options presented here, but for the purposes of this guide we will leave the defaults and move on to the Resource Group. This is a logical grouping for deployed workloads and their related devices/items – and to keep things clean, we will specify a new one now. Note the name of this Resource Group for later. You’ll also need to choose a Location for the workloads – pick whatever is convenient or geographically reasonable for you. I chose West US – make a note of this as well! Click Create.

Now, let’s create a simple Virtual Network. Locate the Virtual Network icon on the panel to the left and click it. Ensure the correct Subscription is selected and click Add.

Again, you’ll be prompted with some basic configuration. Enter a unique name for your new Virtual Network and record it for later. You can choose to modify the other options as necessary, but for this guide we will leave the defaults. It is important, however, that you select to Use Existing Resource Group and specify the group you created in the last step. You’ll also want to select the same Location as you did before. Azure will not deploy VMs (or other objects) if the Location doesn’t match logically between the various components that the object will consume. Click Create.

Now you need to set up an Azure Active Directory application so that vRA can authenticate. Locate the Active Directory icon on the left hand side and click it. Next, click App Registrations and select Add. The most astute readers will notice that there are certain parts of some of my screenshots deleted – sorry about that! Had to remove sensitive information.

Enter a Name for your AD App – it can be anything you like, as long as it complies with the name validation. Leave Web app/API as the Application Type. The Sign-on URL is not really important for the purposes of this configuration – you can enter really anything you want here. In this example, we are using a dummy vRA 7 URL. Click Create (not pictured above, but you should have the hang of it by now!)

Sorry the above image is a little squashed. You can always click them for larger resolution!

Now you need to create a secret key to authenticate to the AD Application with. Click on the name of your new AD Application (in this case vRADevTest) at the left. Make sure you note down the Application ID for later. Then, select the All Settings button in the next pane. Choose Keys from the settings list.

Now, enter a Description for your new key and choose a Duration. Once you have entered those, click Save in the upper left of the blade – but note the warning! You will not ever get another chance to retrieve this value. Save the Key Value for later.

Now, look back to the left and select the Required Permissions option for the AD App. Click Add to create a new permission.

Click Select an API and choose the Windows Azure Service Management API, then click Select.

Click the Select Permissions step at the left, then tick the box for Access Azure Service Management as organization users (preview) – then click Select. Once you do this, the Done button on the left will highlight. Click that as well.

There’s one final step in the Azure portal. Now that the AD Application has been created, you need to authorize it to connect to your Azure Subscription to deploy and manage VMs!

Click back on the Subscriptions icon (the Key) and select your new subscription. You may have to click on the text of the name to get the panel to slide over. Select the Access control (IAM) option to see the permissions to your subscription. Click Add at the top.

Click Select a Role and choose Contributor from the list.

Click the Add Users option and search for the name of your new AD Application. When you see it in the list, tick the box and click Select, then OK in the first blade.

Repeat this process so that your new AD Application has the Owner, Contributor, and Reader roles. It should look like this when you’re done.

Part 2 – Azure CLI and Other Setup

To do the next steps, you will need the Azure CLI tools installed. These are freely available from Microsoft for both Windows and Mac. I won’t go into great detail on how to download and install a client application here – you can get all the info you need from Microsoft’s documentation. For the purposes of this guide, please remember that I use a Mac.

Once you have the Azure CLI installed, you will need to authenticate to your new subscription. Open a Terminal window and enter ‘azure login’. You will be given a URL and a shortcode to allow you to authenticate. Open the URL in your browser and follow these instructions to authenticate your subscription.

Enter your Auth Code and click Continue

Select and log in to your Azure account…


And if all went well, you now have a success message in both your browser and the CLI. Nice work!

If you have multiple subscriptions, as I do, you’ll need to ensure that the correct one is selected. You can do that with the ‘azure account set <subscription-name>’ command. Be sure to escape any spaces!

Before you go any further, you need to register the Microsoft.Compute provider to your new Azure subscription. This only needs to be done once, which means it’s easy to forget! The command is just ‘azure provider register microsoft.compute’ – and it has timed out the first time in 100% of my test cases. So I left that Big Scary Error in the screenshot for you – don’t worry, just run it a second time and it will complete.
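Collected in one place, the CLI setup from this section looks like this (the subscription name is a placeholder for your own):

```shell
# Authenticate interactively; the CLI prints a URL and a shortcode.
azure login

# Select the right subscription if you have more than one.
# Quote or escape any spaces in the name.
azure account set "My Subscription Name"

# One-time registration of the compute provider for this subscription.
# If it times out on the first attempt, just run it again.
azure provider register microsoft.compute
```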

Now, let’s use the Azure CLI to retrieve an example VM image name. These will be used in the vRA Blueprints to specify which type of VM you’d like to deploy. To do this, you’ll use the ‘azure vm image list’ command. In my example, the full command was ‘azure vm image list --location "West US" --publisher canonical --offer ubuntuserver --sku 16.04.0-LTS’ – this limits the list of displayed options to only those present in my West US location, published by Canonical, of type Ubuntu Server, containing the string 16.04.0-LTS in their name.

Choose one of these images and record the URN provided for it. As an example: canonical:ubuntuserver:16.04.0-LTS:16.04.201611150
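For reference, here’s the image query again, paired with the example URN it produced in my environment:

```shell
# List Canonical Ubuntu Server 16.04.0-LTS images in West US.
azure vm image list --location "West US" --publisher canonical \
  --offer ubuntuserver --sku 16.04.0-LTS

# Example URN recorded from the output:
# canonical:ubuntuserver:16.04.0-LTS:16.04.201611150
```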

So, to recap – you have set up your Azure subscription and should have the following list of items recorded:

  • Subscription ID
  • Tenant ID
  • Storage Account Name
  • Resource Group Name
  • Location
  • Virtual Network Name
  • Client Application ID
  • Client Application Secret Key
  • VM Image URN

Now, let’s move on to actually configuring vRA!

Part 3 – Configuring vRA

This section assumes that you have already deployed vRA with the default tenant, have created your basic users and permissions, and have at least one business group ready. This basic level of vRA setup is outside the scope of this guide.

Once you are logged in as an Infrastructure/IaaS administrator, proceed to the Administration tab and select vRO Configuration from the menu at the left (not pictured). Then, choose Endpoints and select New to set up a new endpoint.

The Azure endpoint is not configured from the traditional Infrastructure tab location because it is not managed by the IaaS engine of vRA – it is presented via vRO and XaaS.

Select the Azure plug-in type and click Next

Enter a Name for your Endpoint and click Next again

Now the fun part! Remember all that info you copied down earlier? Time to use it! Fill in the Connection Settings with the details from the subscription configuration you did earlier. You won’t need to change the Azure Services URI or the Login URL, and the Proxy Host/Port are optional unless you know you need one.

Click Finish and the connection should be created!

Next, navigate to the Infrastructure tab and select Endpoints (not pictured), followed by Fabric Groups. In this example I don’t yet have a Fabric Group, so I will create one by clicking New.

Remember a little while ago that I mentioned the Azure Endpoint is not managed by IaaS – so you won’t need to select any Compute Resources here. You just need to ensure that your user account is a Fabric Administrator to continue the rest of the configuration. If you already have this right, you may skip this step.

Now, refresh the vRA UI so that your new Fabric Administrator permissions take effect.

Once that’s done, navigate to the Infrastructure tab and the Reservations menu. Select the New button and choose a reservation of type Azure.

Fill in a Name and select a Business Group and Priority for the reservation, then click on the Resources tab

Enter your Subscription ID – be sure this is the same subscription ID that was specified in your Endpoint configuration. Requiring this field allows the mapping of many reservations to many endpoints/subscriptions.

Then, add the Resource Group and Storage Account which you created earlier. This is not required, but it does save some steps when creating the Blueprint later.

Click on the Network tab.

Enter the name of the Virtual Network you created earlier. Also note that you can set up Load Balancers and Security Groups here. Click OK to save the reservation.

Next, you’ll need a Machine Naming Prefix. Click on the Infrastructure menu option (not pictured) and then select Administration (also not pictured) and finally Machine Prefixes. Enter a string, number of digits and next number that works for you – I used AzureDev-### starting with the number 0. Be sure to click the Green Check to save the prefix.

This prefix will be applied to any objects provisioned in a request – whether they are VMs, NICs, storage disks, etc. This helps the grouped objects to be easily located in an often busy Azure environment.
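To make the prefix scheme concrete, here’s a minimal Python sketch of how a prefix, digit count, and next number combine into machine names. This is purely illustrative – it’s not vRA’s actual implementation, and the `machine_name` helper is my own invention:

```python
def machine_name(prefix: str, digits: int, next_number: int) -> str:
    """Illustrative sketch: zero-pad the counter to the configured digit
    count, mimicking how a vRA prefix like AzureDev-### generates names."""
    return f"{prefix}{next_number:0{digits}d}"

# The example prefix "AzureDev-" with 3 digits, starting at 0:
names = [machine_name("AzureDev-", 3, n) for n in range(3)]
print(names)  # ['AzureDev-000', 'AzureDev-001', 'AzureDev-002']
```

So the first VM provisioned would be AzureDev-000, and its NIC and disks would carry the same grouping prefix.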

Now, click the Administration tab, followed by the Users and Groups menu (not pictured) and the Business Groups option. Select the business group that you plan to deploy with – in this example I have three to choose from and will be using Development.

Select your new Default Machine Prefix and click Finish.

Part 4 – Building a Blueprint

Now that the groundwork is laid, let’s build, entitle, and deploy a simple Azure blueprint!

Head over to the Design tab and make sure the Blueprints menu is open. It should be the default. Click New to begin designing a blueprint.

Give your blueprint a Name and click OK.

Ensure the Machine Types category is selected and drag an Azure Machine to the canvas. Increase the Maximum Instances to 3 – this will make your Azure machine scalable! Click the Build Information tab to proceed.

Now you can begin filling out details about the machine itself. Select a Location – or one will be chosen for you from the reservation. You can also choose a Naming Prefix, or allow the one you set up a moment ago to be the default. You can select a Stock VM Image and paste the URN you retrieved from the Azure CLI, or you can specify a custom, user-created one. Here you can also specify the Authentication options as well as the Instance Size configuration. If any of these options are left blank, they will be required at request time.
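As a reference for that URN field: Azure Marketplace image URNs are four colon-separated parts, Publisher:Offer:Sku:Version, as listed by `az vm image list`. A quick Python sketch of unpacking one – the sample URN here is just an example value, not one from this lab:

```python
# An Azure Marketplace image URN has four colon-separated parts:
# Publisher:Offer:Sku:Version (as listed by `az vm image list`).
sample_urn = "Canonical:UbuntuServer:16.04-LTS:latest"  # example value

publisher, offer, sku, version = sample_urn.split(":")
print(f"publisher={publisher} offer={offer} sku={sku} version={version}")
```

Any URN in this shape can be pasted straight into the Stock VM Image field.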

Note that when editing a field, you will see an editing dialog appear on the right of the blueprint form. This is to allow you additional flexibility in the configuration; please be sure to click ‘Apply’ to save any changes. Also note that there are many helpful tooltips throughout the blueprint designer to help you along.

Click the Machine Resources tab to move on.

Here you can specify your Resource Group and Availability Set – and as before, you can fill in the one you created manually or allow vRA to create new ones for you. Remember to fill in the information on the right hand side and click Apply to save the values!

Click Storage to move to the next step.

The Storage tab allows you to specify details about your machine’s storage capabilities. You can specify the Storage Account here if you choose – or it can be inherited from the Reservation. If you explore this tab, you’ll see you can also create additional data disks as well as enable/disable the boot diagnostics functionality. For this example we will just create a simple OS disk configuration.

Now, click on the Network tab.

This is where you can configure advanced networking capabilities. In this example, you won’t fill anything in – instead, the Azure reservation will apply the networking properties you specified earlier. Click Finish to save your blueprint.

Select your new blueprint and Publish it.

Now you must entitle your new blueprint. Because the steps to complete this operation can be highly dependent on the environment you’re doing it in, we will skip the details on how to create an entitlement and add this blueprint to it. Let’s move right ahead to provisioning the VM!

Part 5 – Deploying a Blueprint

I hope you’re glad you stuck with me this far! To recap, so far you have:

  • Created and configured your Azure subscription for vRA
  • Collected up a list of all the important pieces of data needed to provision to Azure
  • Configured vRA to deploy to Azure
  • Built your first Azure blueprint

There’s just one thing left to do…

Navigate to the Catalog tab, locate your new Azure blueprint and click Request.

Feel free to click around the request details – you’ll see that anything you specified in the blueprint itself is now a locked field. Other fields are still open and available for editing. You can create some seriously flexible requests by locking and unlocking only specific fields – the form is highly customizable.

When you’re done exploring, click Submit!

You can monitor the status of the request as you normally would, in the Requests tab.

After the provisioning completes, you’ll be able to see your new Azure VM in vRA…

…as well as in the Azure portal itself! You can see that the Naming Prefix was applied to both the VM and the vNIC that was created to support it.

This post was brought to you courtesy of Southern Tier Brewing’s Pumking – possibly the only good pumpkin beer ever. It hits all the natural squash and spice notes without ever feeling extracted, artificial, or overwhelming. And it gets bonus points for being from my home town. Yum!

I hope this guide has been helpful and that you’re as excited as I am about this great new addition to vRealize Automation’s repertoire. Please leave any feedback in the comments, and don’t forget to follow me on Twitter!

What’s in the bag?

Inspired by a post by Michael White, I thought it would be interesting to share what’s in my laptop bag as well. And I bet those of you who know me have been curious before – my satchel is definitely my trademark.

As someone who travels pretty much endlessly, the items in this bag have been refined over many trips to be maximally useful with minimum bulk. Everything must have a purpose or there’s just no room for it.

First up is the bag itself. I see this leather beast as a long-time friend and companion – it’s toured the world with me and I can’t imagine a journey without it. It’s a Saddleback Medium Original Briefcase, and yes – it’s heavy as hell. It weighs almost 8 lbs empty. My one vice when it comes to the “pack light” mentality.


The bag is organized into two main compartments, with a few small pockets inside. It’s nowhere near as full as it looks; plenty of room for more if and when necessary. Usually I’ll wad up a sweatshirt or something in there too.


Inside the bag are the following items:

  • My trusty 13″ MacBook Pro Retina (+stickers, of course)
  • iPad Pro 9.7″ in a Waterfield Designs case
  • Cable and Dongle bag (more on that below)
  • Sunglasses
  • Bluetooth Headphones
  • VMware ID
  • Olight Smini Baton Ti flashlight
  • Fjallraven card case for business cards, status membership cards, vouchers, etc
  • Flowfold wallet
  • Bose QC20i headphones
  • Big Idea Design aluminum pen (with UniBall Jetstream ink, since I’m a smear-prone lefty)
  • Leather notebook cover with Field Notes notebook
  • Kindle Paperwhite


The iPad is a mini-computer in and of itself, thanks to the keyboard case, Pencil and VMware Horizon 🙂


And inside the cable and dongle bag are the following:

  • Zolt Charger and cable (charges my MacBook and has 2 additional USB charging ports – much lighter and smaller than an Apple charger and a bunch of additional USB chargers)
  • Bundle of lightning, micro-USB and Fitbit chargers
  • Spare wired headphones
  • Little fold-out USB hub from Palo Alto Networks (one of the most useful giveaways I’ve ever gotten – great for charging all those little devices at once)
  • 32GB and 16GB thumb drives
  • VMware vCloud Air battery pack
  • Apple Watch charging cable

That’s pretty much it! Thanks again to Michael (follow him @mwVme) for the fun idea.


And, of course – this post was brought to you by Saison Dupont’s Cuvee Dry Hopping series. A unique yearly spin on the classic Belgian Farmhouse that allows the brewer to experiment with dry hopping. This year’s version used Brewer’s Gold hops and came out delicious – sweet and bitter with that famous Saison Dupont brettanomyces tang.

vRA 7 – Editing Machine Blueprint Settings

So with the recent release of vRealize Automation 7, I have been showing it off at every chance I get. Whether it’s to customers or to our own internal employees – the response has been overwhelmingly positive!

One thing I have had more than one person ask me about, though, is whether or not you can edit the settings for a Machine Blueprint after it’s been deployed.

The answer is yes, but I understand why some folks may overlook where you can do this.

First, enter the Design tab and edit the Blueprint you wish to modify by clicking it.

Then, just look for the Gear icon next to the Blueprint name and click it:


This will bring you to the Blueprint Settings dialog, where you can modify the name, description, NSX settings, lease times, etc.


This post was brought to you by the St. Supery 2009 Dollarhide Petit Verdot. Dark, almost inky black and full of vanilla oak and bold tannins, its multi-layer complexity goes great with a Converged Blueprint.


Happy automating!

Reflecting on Hands-On Labs at VMworld 2015

Now that I’ve had a day or two to decompress after another action-packed VMworld, I thought it would be appropriate to just post a few thoughts about the experience.

I became involved with the Hands-On Labs shortly before VMworld 2014, making this my second cycle with the program. At the time, I had no idea how difficult or how rewarding the experience would be. As it turns out, participating in the Labs has been one of the single most personally and professionally satisfying undertakings of my life.

The development cycle began back in February of 2015, when a few of my fellow captains and I began developing what would be known as the “SDDC Base Pod” – a fully integrated single-site environment based on vSphere 6.0. This pod would contain all of the necessary components to showcase VMware’s Software-Defined Datacenter. Once extensive performance and integration testing had been completed, the pod was saved and made available to the rest of the individual lab development teams. This happened around May – and is when we really began creating our lab-specific content. All in all, each of us has contributed 500+ hours to the development, testing and delivery of this lab.

Working with Kim (@KCDAutomate), Shawn (@ShawnMKelly) and Grant (@GrantOrchard) with Burke (@TechnicalValues) as our leader, we laid down the additional software components, configuration, development and documentation to create the 8 amazing modules which comprised our 2015 lab. I’m pleased to be able to reveal the details of the lab now that VMworld has concluded:

HOL-SDC-1632 – vRealize Automation Advanced: Integration and Extensibility

A list of the modules is as follows:

  • Module 1 – You Need More Integration
  • Module 2 – An Introduction to Extensibility
  • Module 3 – Integrating vRealize Automation with the VMware Cloud Management Platform
  • Module 4 – Integrating vRealize Automation with Infoblox IPAM
  • Module 5 – Integrating vRealize Automation with Puppet Enterprise
  • Module 6 – Integrating vRealize Automation with NSX
  • Module 7 – XaaS Services with Advanced Service Designer and vRealize Orchestrator
  • Module 8 – Working with the vRealize Automation API

Each of the above was lovingly handcrafted by our team to show off not only the power and flexibility of the vRealize Automation engine, but also the amazing ways it can be integrated with the other components of the VMware Cloud Management Platform, as well as third-party solutions that might already exist in your infrastructure.

But creating the labs is only the start. Delivering the content at both VMworld events and supporting it throughout the year is when the real work begins. The amazing Hands-On Lab staff works tirelessly to make sure that every attendee and lab user has a seamless, enlightening, engaging and enthralling experience. There are core staff, support staff, principals, captains, proctors and administrators. All of them play a role in making the premier hands-on learning event in the industry a reality, and they all deserve huge thanks.

According to the surveys we received, our lab was a resounding success – as were the Expert-Led Workshops we hosted to teach our customers all about extensibility.

But, of course, events like this can’t be all work. We have plenty of fun too – and I’m very pleased to be able to call so many of these rockstars my friends, and want to thank some of them. Particularly:

  • Jad, Chris and Tina for wrangling all the staff and dealing with all the administrative work that’s so important with this many staff
  • Kim, Grant, Shawn and Burke for being the most amazing team I can think of. We’ve helped each other learn and grow so much in such a short time, and it’s been incredible
  • Doug, Bill and Dave for supporting us as we built, tested, reimagined, rebuilt, re-tested and rebuilt the environments time and time again
  • The rest of the principals, captains and proctors who helped create all the other content and made the lab room the bustling hive of expert conversation it was

That’s all for now – we’ll see some of you in a few weeks at VMworld in Barcelona – and keep an eye on the Hands-On Labs portal for this year’s content to be available to you at home!

Now if you’ll excuse me, my grill is hot and these rib-eyes are calling my name. Paired with a 2011 Miner Oakville Cabernet, I don’t think I can wait much longer.


Have you taken a VMware Hands-On Lab lately?

The title really sort of says it all!

For those of you who don’t know, the VMware Hands-On Labs program is a truly unique offering in the industry, allowing customers anywhere to test drive any of VMware’s products in live environments. From anywhere, at any time. For free.

We provide you with the environment, the infrastructure, and all the software – pre-installed and configured. You just bring your imagination and willingness to learn. You don’t have to be a paying customer or be tied to a VMware software account of any kind. Just head on over to the Hands-On Labs portal and register.

Once there, you can choose from a catalog of more than 50 labs (with 40 new or updated ones to be released at VMworld 2015) spanning our entire portfolio. Whether you’re interested in learning what’s new in vSphere 6, deploying advanced vRealize Automation integrations, getting some stick time with an EVO:RAIL, or seeing how to start moving your business to vCloud Air, the Hands-On Labs provide a safe and free place to do it.

But (shameless plug alert!) the best part about the Labs is the guidance, manuals and use cases that have been prepared to go along with them. Each lab is carefully designed by customer-facing subject matter experts like yours truly, so you can be sure the use cases are relevant and represent real-world questions and situations that our customers ask about daily. Small teams of dedicated VMware employees take great pride in investing hundreds of hours every year to make sure you have the most seamless, robust, amazing experience possible.

If you’d like to see an example of my work, HOL-SDC-1421 (Using vRealize Automation to Build and Deploy Services and Applications) is a 101-level vRealize Automation lab my team wrote last year. It’s available in the public catalog now.

At VMworld this year, my dream team and I will be pleased to release HOL-SDC-1632 (vRealize Automation Advanced: Integration and Extensibility) – our most advanced Automation lab ever. You won’t want to miss this one.

Big thanks go out to Burke Azbill (@TechnicalValues), Kim Delgado (@KCDAutomate), Shawn Kelly (@shawnmkelly), and Grant Orchard (@grantorchard) for making up 4/5 of the most collaborative, open-minded, hardest working HOL team in the whole company.

So head on over to the portal and register, follow @VMwareHOL on Twitter, or better yet – join us at VMworld 2015 and take a few labs in person with our expert staff!

Did I mention that the Labs are completely free? I think I might have.


A thank you to all my peers and customers

I’ve been with VMware for just about 18 months now. It’s been one of the most rewarding, challenging, utterly fantastic experiences of my life. We work hard – and we play almost as hard. I’ve taken great pride in my work with my customers and with my peers throughout the company.

This past week, I received a call from my manager informing me that this work had been recognized and rewarded with a promotion from a Senior SE to a Staff SE. This is a real honor for me, and one that reminds me that while I may have come a long way, I still have a long way to go.

I’m also reminded that none of this would have been possible without all the great and honest feedback from my customers and the various teams throughout VMware that I work with every day. With that in mind, I send out a thank you to all my peers and customers for placing your trust in me. In return, you have my commitment that I will continue to provide the best possible service and support that I can!


Of course, it wouldn’t be a complete post without some kind of celebratory beverage. This photo was taken at a local establishment just a few minutes after I received the good news. Track Seven’s Panic IPA is a stellar brew, made with Amarillo and Simcoe hops front and center, rather than the more common Citra and Cascade varieties. The result is a high-hop flavor (70 IBU) without the face-shredding pucker factor. Lots of citrus and floral notes explode with every sip. Don’t let the can fool you, this is a top-shelf local craft beer. Check out Track Seven next time you visit me in Sacramento!

Cheers, and thanks again.

vRA Live! – Session 2 – Extensibility

Shameless plug here for an upcoming community event that @virtualjad will be hosting later this month – vRA Live! – Session 2 – Extensibility

The vRA Live sessions are meant to provide a live and real-time demonstration of the power of vRealize Automation, combined with an expert panel (including yours truly) who will host open discussion and Q&A while the magic happens. They are a lot of fun and incredibly informative.

Be sure to register in advance over at Jad’s blog – and we’ll see you there!