Reflections on a Career with VMware

2018 is a special year for me. It marks the 4-year anniversary of my career with VMware – the best company I’ve ever worked for, and arguably one of the best in the world. I was so pleased to come into my office a few weeks ago and find my beautiful 4-year VASA cubes waiting for me – as both someone who bleeds VMware green and blue and as a total contemporary art nerd, I’d really been looking forward to these.


And as I enter my 5th year with this incredible, visionary organization and look back on my time here, I’m pleased to announce that I’ve been honored with yet another new opportunity.

As of Monday, March 26th 2018, I will take on the new title of Product Line Manager for Cloud Automation – focusing on our Blueprinting and Infrastructure-as-Code strategies, among other things. This is an amazing opportunity to influence the strategy – and ultimately the deliverables themselves – for a solution that I’ve loved selling, then helping to actually engineer and design during my tenure.

But how did I get here? Not to bore you with my life’s story, but I think the journey is a real testament to the incredible culture at VMware, and I hope the telling will be both interesting and motivating.

My journey with VMware actually started in 1998, when I used their first-ever product to run a Slackware Linux VM in order to keep me safe(r) on IRC. I was tired of swapping hard drives every time I wanted to go online, and the IRC of 1998 was absolutely no place for a Windows user. My future was pretty much cemented at that point – there was no turning back. VMs were the future, and I wanted a piece of it. Over the following 15 years, I earned two degrees and several titles throughout the IT industry designing, implementing and managing VMware solutions all over the country.

In late 2013, I got an unexpected ping from a recruiter. They’d seen my profile online and wanted to talk. This would end up being the most important email I’d ever received, and a few weeks later I was going through the most intense multi-phasic interview process I had ever experienced. After an admittedly weird late-night meeting in a hotel lobby with the man who would be my first manager at VMware, the job was mine – a Senior Cloud Management Specialist for their Public Sector customers. My focus: something which was, at the time, called vCloud Automation Center, a product I’d never even heard of before.

Over the coming weeks, months and years, I taught myself how to deploy, use, troubleshoot, and most importantly, sell this amazing piece of software. Because of my background automating manual processes and transforming businesses, I enjoyed great success in this role. The stories I could tell mirrored the problems my customers were having, and it was one of the greatest joys in my professional life to be able to say that I had such a complete solution for them, and to help them realize the potential therein.

I got involved with the Hands-On Labs (have you taken a lab today? If not, check them out for free!) and through this program, made the invaluable connections to product management, engineering and marketing that I would eventually use as a springboard to become part of the product team itself.

In 2016, I joined the Cloud Management Business Unit’s engineering team as a Product Owner – a short-lived title that eventually transitioned to Staff Functional Architect, a role responsible for ensuring that the product accounted for real-world customer problems in a clear and effective way. As the team evolved and grew, I was eventually made part of the Customer Success Engineering team, which focused on ensuring that VMware’s top, most strategic customers realized the true potential of their purchases and used them to the fullest. This was an incredibly gratifying role, combining many of the things I loved most: working closely with our customers and learning new and interesting ways to apply our solutions.

But there was one thing lacking: I missed being part of the process. I really felt that I had something to contribute to our future direction – I wanted to get more involved with setting goals for the team, analyzing the market we’re part of, and defining our direction based on the trends and conditions in that market. That’s why this latest role is so exciting for me – I get to do all of that, and so much more. I’ll join a much smaller group of individuals who are all focused on ensuring that VMware continues to be a leader in Cloud Automation for years to come – and I couldn’t be more excited about the potential there.

I owe so much to so many people for helping me on this journey.

  • My original Specialist team, for taking a chance on someone who’d never sold anything in his life.
  • My customers, for buying stuff from someone who’d never sold anything in his life.
  • My very dear friends Kim Delgado, Jad El-Zein and Grant Orchard, who helped mentor me, teach me, and introduce me to the network of people who would make this possible.
  • The entire Hands-On Labs staff for giving me the opportunity to become comfortable and confident with a totally new piece of software.
  • The leadership of the Cloud Management Business Unit, where I’ve had the pleasure of working under 3 out of 5 of our VPs, for growing me into a whole new type of role.
  • And finally, the Cloud Automation PM team, for taking a chance on me. I hope I’ll prove worthy of the trust you’re placing in me.

And now, as a tribute to Grant (who brought me the bottle all the way from Oz), I raise a glass to you all. Here’s to the next 30 years at VMware. May they be as full of adventure as the last 4.


vRealize Automation 7.3 REST API Documentation

With a lion’s roar, a new version of vRA went GA on May 16th – complete with dozens of new features, hundreds of bugfixes, and a heaping helping of love and care. If you’d like more info on anything but that last part, please see the release notes here. But one thing that may be overlooked is the set of significant improvements made to the vRealize Automation 7.3 REST API documentation.

As of the time of writing, we have made several samples available (in the form of Postman collections) containing REST API calls for our most common vRA use cases. These samples are hosted on GitHub.

For more detailed information on these samples, please see this blog post by our very own Sudershan Bhandari on what he was trying to accomplish with this collection and how you can use it to accelerate your use of the vRA APIs.

Some examples of the API samples provided include:

  • Create and entitle a composition blueprint
  • Create and entitle a parameterized blueprint (using the all-new component profiles)
  • Export/Import blueprints and other content/components
  • Perform various day 2 operations on catalog resources, including reconfigure, Scale-In/Scale-Out and others
  • Manage endpoint configuration
  • Create approval policy and approve or reject an approval request
  • Create reservations of various types
  • Create and manage a tenant, including creating authentication directories for the tenant
  • Manage users and their roles
  • Configure an NSX provisioning setup including endpoint, reservation, network profiles and sample blueprints
  • Create property definitions and retrieve values backed by vRO script actions
  • Create and manage reclamation requests
  • Register event topics and subscribe/delete subscription to event topics
  • API tips on bearer token management, pagination, sorting, filtering
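Many of the calls in these collections share the same plumbing: request a bearer token from the identity service, attach it to every subsequent call, and page through large result sets. As a rough sketch of that plumbing (the host name and credentials below are made-up placeholders, not values from the samples), here is how those pieces fit together against a vRA 7.x instance:

```python
import json

# Placeholder host for illustration only; substitute your own vRA FQDN.
VRA_HOST = "vra.example.com"

def token_request(username, password, tenant):
    """Build the URL and JSON body for the vRA 7.x identity token call
    (POST /identity/api/tokens). The response's 'id' field is the token."""
    url = "https://%s/identity/api/tokens" % VRA_HOST
    body = json.dumps({"username": username, "password": password, "tenant": tenant})
    return url, body

def bearer_headers(token_id):
    """Every subsequent API call carries the token in an Authorization header."""
    return {"Authorization": "Bearer %s" % token_id,
            "Accept": "application/json",
            "Content-Type": "application/json"}

def paged(url, page=1, limit=20):
    """Append the page/limit query parameters used for pagination."""
    return "%s?page=%d&limit=%d" % (url, page, limit)
```

From there, any HTTP client – Postman, curl, or a scripting language – can replay the sample calls; the collections on GitHub show the exact request bodies for each use case.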

We have also entirely revamped our API documentation reference on VMware{code} – it now shows the APIs per service, an overview of each service, the API listing and relevant sample code snippets, all in a very organized and easily searchable manner.

Our API programming guides have also been completely reworked for ease of use and friendlier navigation – to get you started faster and support you more easily.

So, while there are tons of amazing new capabilities in our new flagship release of vRealize Automation, I hope you won’t overlook the huge investment we’ve made in this vital area. Check it out today!

As always, this post was brought to you by Tropikalia IPA by White Stork Brewing Company. It’s pretty much my go-to while I’m working with our amazing vRA engineers in Sofia.


Using the new Microsoft Azure Endpoint in vRealize Automation 7.2

After months of planning and development, vRealize Automation 7.2 finally went GA today, and it feels so good! One of the most anticipated and spotlight features of this new release was the Endpoint for Microsoft Azure. I had the privilege of working very closely with the team who delivered this capability, and thought I would take some time to develop a brief POC type guide to help get you started using the new Microsoft Azure Endpoint in vRealize Automation 7.2

This guide will walk you through configuring a brand-new Azure subscription to support a connection from vRealize Automation, then help you set up your vRA portal, and finally design and deploy a simple Blueprint. We will assume that you have already set up your Azure subscription (if not, you can sign up for a free trial) and that you have a vRealize Automation 7.2 install all ready to go. Certain steps outlined in this guide assume that your vRA configuration is rather basic and is not in production. Please use them at your own risk and consider any changes you make before you make them!

Part 1: Configuring Azure

Once you have your subscription created, log in to the Azure portal and click on the Key (Subscriptions) icon in the left-hand toolbar. These icons can be re-ordered, so keep in mind that yours may be in a different spot than mine. Note down the Subscription ID (boxed in red above) – you will need this later!

Next, click on the Help icon near the upper right corner and select Show Diagnostics. This will bring up some raw data about your subscription – and here is the easiest place I’ve found to locate your Tenant ID. Simply search for “tenant” and select the field shown above. Note this ID for later as well.

Now you’ll need to create a few objects in the Azure portal to consume from vRA. One of the great capabilities the new endpoint brings is the ability to create new, on demand objects per request – but to make things a little cleaner we will create just a few ahead of time. We’ll start with a Storage Account and a Resource Group.

Locate the Storage Accounts icon in the sidebar – again, keeping in mind that these icons can be reordered and you may have to poke around a bit to find it. Make sure the correct Subscription is selected and click Add.

You’ll be prompted with a sliding panel (Azure does love sliding panels) where you can fill in some important details about your Storage Account. This is basically a location where your files, VHDs, tables, databases, etc. will be stored. Enter a Name for the Storage Account – you’ll need to make sure to follow the rules here: only lowercase letters and numbers, globally unique, etc. You can choose to change any of the options presented here, but for the purposes of this guide we will leave the defaults and move on to the Resource Group.

The Resource Group is a logical grouping for deployed workloads and their related devices/items – and to keep things clean, we will specify a new one now. Note the name of this Resource Group for later. You’ll also need to choose a Location for the workloads – pick whatever is convenient or geographically reasonable for you. I chose West US – make a note of this as well! Click Create.
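If you want to sanity-check a candidate name before the portal rejects it, the syntactic constraints (3–24 characters, lowercase letters and digits only) are easy to encode – a quick sketch:

```python
import re

# 3-24 characters, lowercase letters and digits only - the same rule the portal enforces.
STORAGE_NAME_RE = re.compile(r"^[a-z0-9]{3,24}$")

def valid_storage_account_name(name):
    """True if the name passes Azure's syntactic rules for storage accounts.
    Global uniqueness can only be verified by Azure itself at creation time."""
    return bool(STORAGE_NAME_RE.match(name))
```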

Now, let’s create a simple Virtual Network. Locate the Virtual Network icon on the panel to the left and click it. Ensure the correct Subscription is selected and click Add.

Again, you’ll be prompted with some basic configuration. Enter a unique name for your new Virtual Network and record it for later. You can choose to modify the other options as necessary, but for this guide we will leave the defaults. It is important, however, that you select to Use Existing Resource Group and specify the group you created in the last step. You’ll also want to select the same Location as you did before. Azure will not deploy VMs (or other objects) if the Location doesn’t match logically between the various components that the object will consume. Click Create.

Now you need to set up an Azure Active Directory application so that vRA can authenticate. Locate the Active Directory icon on the left hand side and click it. Next, click App Registrations and select Add. The most astute readers will notice that there are certain parts of some of my screenshots deleted – sorry about that! Had to remove sensitive information.

Enter a Name for your AD App – it can be anything you like, as long as it complies with the name validation. Leave Web app/API as the Application Type. The Sign-on URL is not really important for the purposes of this configuration – you can enter nearly anything you want here. In this example, we are using a dummy vRA 7 URL. Click Create (not pictured above, but you should have the hang of it by now!)

Sorry, the above image is a little squashed. You can always click the images for larger resolution!

Now you need to create a secret key to authenticate to the AD Application with. Click on the name of your new AD Application (in this case vRADevTest) at the left. Make sure you note down the Application ID for later. Then, select the All Settings button in the next pane. Choose Keys from the settings list.

Now, enter a Description for your new key and choose a Duration. Once you have entered those, click Save in the upper left of the blade – but note the warning! You will not ever get another chance to retrieve this value. Save the Key Value for later.

Now, look back to the left and select the Required Permissions option for the AD App. Click Add to create a new permission.

Click Select an API and choose the Windows Azure Service Management API, then click Select

Click the Select Permissions step at the left, then tick the box for Access Azure Service Management as organization users (preview) – then click Select. Once you do this, the Done button on the left will highlight. Click that as well.

There’s one final step in the Azure portal. Now that the AD Application has been created, you need to authorize it to connect to your Azure Subscription to deploy and manage VMs!

Click back on the Subscriptions icon (the Key) and select your new subscription. You may have to click on the text of the name to get the panel to slide over. Select the Access control (IAM) option to see the permissions to your subscription. Click Add at the top.

Click Select a Role and choose Contributor from the list

Click the Add Users option and search for the name of your new AD Application. When you see it in the list, tick the box and click Select, then OK in the first blade.

Repeat this process so that your new AD Application has the Owner, Contributor, and Reader roles. It should look like this when you’re done.

Part 2 – Azure CLI and Other Setup

To do the next steps, you will need the Azure CLI tools installed. These are freely available from Microsoft for both Windows and Mac. I won’t go into great detail on how to download and install a client application here – you can get all the info you need from Microsoft. For the purposes of this guide, please remember that I use a Mac.

Once you have the Azure CLI installed, you will need to authenticate to your new subscription. Open a Terminal window and enter ‘azure login’. You will be given a URL and a shortcode to allow you to authenticate. Open the URL in your browser and follow these instructions to authenticate your subscription.

Enter your Auth Code and click Continue

Select and log in to your Azure account…


And if all went well, you now have a success message in both your browser and the CLI. Nice work!

If you have multiple subscriptions, as I do, you’ll need to ensure that the correct one is selected. You can do that with the ‘azure account set <subscription-name>’ command. Be sure to escape any spaces!

Before you go any further, you need to register the Microsoft.Compute provider to your new Azure subscription. This only needs to be done once, which means it’s easy to forget! The command is just ‘azure provider register microsoft.compute’ – and it has timed out the first time in 100% of my test cases. So I left that Big Scary Error in the screenshot for you – don’t worry, just run it a second time and it will complete.

Now, let’s use the Azure CLI to retrieve an example VM image name. This will be used in the vRA Blueprints to specify which VM image you’d like to deploy. To do this, you’ll use the ‘azure vm image list’ command. In my example, the full command was ‘azure vm image list --location "West US" --publisher canonical --offer ubuntuserver --sku 16.04.0-LTS’ – this limits the list of displayed options to only those present in my West US location, published by Canonical, of type Ubuntu Server, containing the string 16.04.0-LTS in their name.

Choose one of these images and record the URN provided for it. As an example: canonical:ubuntuserver:16.04.0-LTS:16.04.201611150
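The URN is just four colon-separated fields – publisher, offer, SKU and version – so if you’re scripting around the CLI output, pulling it apart is trivial. A small sketch:

```python
def parse_image_urn(urn):
    """Split an Azure image URN (publisher:offer:sku:version) into its parts."""
    publisher, offer, sku, version = urn.split(":")
    return {"publisher": publisher, "offer": offer, "sku": sku, "version": version}
```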

So, to recap – you have set up your Azure subscription and should have the following list of items recorded:

  • Subscription ID
  • Tenant ID
  • Storage Account Name
  • Resource Group Name
  • Location
  • Virtual Network Name
  • Client Application ID
  • Client Application Secret Key
  • VM Image URN

Now, let’s move on to actually configuring vRA!

Part 3 – Configuring vRA

This section assumes that you have already deployed vRA with the default tenant, have created your basic users and permissions, and have at least one business group ready. This basic level of vRA setup is outside the scope of this guide.

Once you are logged in as an Infrastructure/IaaS administrator, proceed to the Administration tab and select vRO Configuration from the menu at the left (not pictured). Then, choose Endpoints and select New to set up a new endpoint.

The Azure endpoint is not configured from the traditional Infrastructure tab location because it is not managed by the IaaS engine of vRA – it is presented via vRO and XaaS.

Select the Azure plug-in type and click Next

Enter a Name for your Endpoint and click Next again

Now the fun part! Remember all that info you copied down earlier? Time to use it! Fill in the Connection Settings with the details from the subscription configuration you did earlier. You won’t need to change the Azure Services URI or the Login URL, and the Proxy Host/Port are optional unless you know you need one.

Click Finish and the connection should be created!

Next, navigate to the Infrastructure tab and select Endpoints (not pictured), followed by Fabric Groups. In this example I don’t yet have a Fabric Group, so I will create one by clicking New.

Remember that a little while ago I mentioned the Azure Endpoint is not managed by IaaS – so you won’t need to select any Compute Resources here. You just need to ensure that your user account is a Fabric Administrator to continue the rest of the configuration. If you already have this role, you may skip this step.

Now, refresh the vRA UI so that your new Fabric Administrator permissions take effect.

Once that’s done, navigate to the Infrastructure tab and the Reservations menu. Select the New button and choose a reservation of type Azure.

Fill in a Name and select a Business Group and Priority for the reservation, then click on the Resources tab

Enter your Subscription ID – be sure this is the same subscription ID that was specified in your Endpoint configuration. Requiring this field allows the mapping of many reservations to many endpoints/subscriptions.

Then, add the Resource Group and Storage Account which you created earlier. This is not required, but it does save some steps when creating the Blueprint later.

Click on the Network tab.

Enter the name of the Virtual Network you created earlier. Also note that you can set up Load Balancers and Security Groups here. Click OK to save the reservation.

Next, you’ll need a Machine Naming Prefix. Click on the Infrastructure menu option (not pictured), then select Administration (also not pictured) and finally Machine Prefixes. Enter a string, number of digits and next number that works for you – I used AzureDev-### starting with the number 0. Be sure to click the Green Check to save the prefix.

This prefix will be applied to any objects provisioned in a request – whether they are VMs, NICs, storage disks, etc. This helps the grouped objects to be easily located in an often busy Azure environment.
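The naming convention itself is simple: the configured string plus a zero-padded counter. A sketch of the pattern the prefix produces (just the convention, not vRA’s actual implementation):

```python
def machine_name(prefix, digits, number):
    """Combine a machine prefix with a zero-padded counter, e.g. AzureDev-001.
    'digits' is the number-of-digits setting from the Machine Prefixes screen."""
    return "%s%0*d" % (prefix, digits, number)
```

So a prefix of AzureDev- with three digits yields names like AzureDev-000, AzureDev-001, and so on, for the VM and each of its supporting objects.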

Now, click the Administration tab, followed by the Users and Groups menu (not pictured) and the Business Groups option. Select the business group that you plan to deploy with – in this example I have three to choose from and will be using Development.

Select your new Default Machine Prefix and click Finish.

Part 4 – Building a Blueprint

Now that the groundwork is laid, let’s build, entitle, and deploy a simple Azure blueprint!

Head over to the Design tab and make sure the Blueprints menu is open. It should be the default. Click New to begin designing a blueprint.

Give your blueprint a Name and click OK

Ensure the Machine Types category is selected and drag an Azure Machine to the canvas. Increase the Maximum Instances to 3 – this will make your Azure machine scalable! Click the Build Information tab to proceed.

Now you can begin filling out details about the machine itself. Select a Location – or one will be chosen for you from the reservation. You can also choose a Naming Prefix or allow the one you set up a moment ago to be the default. You can choose to select a Stock VM Image and paste the URN you retrieved from the Azure CLI, or you can specify a custom, user-created one. Here you can also specify the Authentication options as well as the Instance Size configuration. If any of these options are left blank, they will be required at request time.

Note that when editing a field, you will see an editing dialog appear on the right of the blueprint form. This is to allow you additional flexibility in the configuration; please be sure to click ‘Apply‘ to save any changes. Also note that there are many helpful tooltips throughout the blueprint designer to help you along.

Click the Machine Resources tab to move on.

Here you can specify your Resource Group and Availability Set – and as before, you can fill in the one you created manually or allow vRA to create new ones for you. Remember to fill in the information on the right hand side and click Apply to save the values!

Click Storage to move to the next step.

The Storage tab allows you to specify details about your machine’s storage capabilities. You can specify the Storage Account here if you choose – or it can be inherited from the Reservation. If you explore this tab, you’ll see you can also create additional data disks as well as enable/disable the boot diagnostics functionality. For this example we will just create a simple OS disk configuration.

Now, click on the Network tab.

This is where you can configure advanced networking capabilities. In this example, we won’t fill anything in and will instead allow the Azure reservation to apply the networking properties you specified earlier. Click Finish to save your blueprint.

Select your new blueprint and Publish it.

Now you must entitle your new blueprint. Because the steps to complete this operation can be highly dependent on the environment you’re doing it in, we will skip the details on how to create an entitlement and add this blueprint to it. Let’s move right ahead to provisioning the VM!

Part 5 – Deploying a Blueprint

I hope you’re glad you stuck with me this far! To recap, so far you have:

  • Created and configured your Azure subscription for vRA
  • Collected up a list of all the important pieces of data needed to provision to Azure
  • Configured vRA to deploy to Azure
  • Built your first Azure blueprint

There’s just one thing left to do…

Navigate to the Catalog tab, locate your new Azure blueprint and click Request.

Feel free to click around the request details – you’ll see that anything you specified in the blueprint itself is now a locked field. Other fields are still open and available for editing. You can create some seriously flexible requests by locking and unlocking only specific fields – the form is highly customizable.

When you’re done exploring, click Submit!

You can monitor the status of the request as you normally would, in the Requests tab.

After the provisioning completes, you’ll be able to see your new Azure VM in vRA…

…as well as in the Azure portal itself! You can see that the Naming Prefix was applied to both the VM and the vNIC that was created to support it.

This post was brought to you courtesy of Southern Tier Brewing’s Pumking – possibly the only good pumpkin beer ever. It hits all the natural squash and spice notes without ever feeling extracted, artificial, or overwhelming. And it gets bonus points for being from my home town. Yum!

I hope this guide has been helpful and that you’re as excited as I am about this great new addition to vRealize Automation’s repertoire. Please leave any feedback in the comments, and don’t forget to follow me on Twitter!

What’s in the bag?

Inspired by a post by Michael White, I thought it would be interesting to share what’s in my laptop bag as well. And I bet those of you who know me have been curious before – my satchel is definitely my trademark.

As someone who travels pretty much endlessly, the items in this bag have been refined over many trips to be maximally useful with minimum bulk. Everything must have a purpose or there’s just no room for it.

First up is the bag itself. I see this leather beast as a long time friend and companion – it’s toured the world with me and I can’t imagine a journey without it. It’s a Saddleback Medium Original Briefcase, and yes – it’s heavy as hell. Weighs almost 8lbs empty. My one vice when it comes to the “pack light” mentality.


The bag is organized into two main compartments, with a few small pockets inside. It’s nowhere near as full as it looks; plenty of room for more if and when necessary. Usually I’ll wad up a sweatshirt or something in there too.


Inside the bag are the following items:

  • My trusty 13″ MacBook Pro Retina (+stickers, of course)
  • iPad Pro 9.7″ in a Waterfield Designs case
  • Cable and Dongle bag (more on that below)
  • Sunglasses
  • Bluetooth Headphones
  • VMware ID
  • Olight Smini Baton Ti flashlight
  • Fjallraven card case for business cards, status membership cards, vouchers, etc
  • Flowfold wallet
  • Bose QC20i headphones
  • Big Idea Design aluminum pen (with UniBall Jetstream ink, since I’m a smear-prone lefty)
  • Leather notebook cover with Field Notes notebook
  • Kindle Paperwhite


The iPad is a mini-computer in and of itself, thanks to the keyboard case, Pencil and VMware Horizon 🙂


And inside the cable and dongle bag are the following:

  • Zolt Charger and cable (charges my MacBook and has 2 additional USB charging ports – much lighter and smaller than an Apple charger and a bunch of additional USB chargers)
  • Bundle of lightning, micro-USB and Fitbit chargers
  • Spare wired headphones
  • Little fold-out USB hub from Palo Alto Networks (one of the most useful giveaways I’ve ever gotten – great for charging all those little devices at once)
  • 32GB and 16GB thumb drives
  • VMware vCloud Air battery pack
  • Apple Watch charging cable

That’s pretty much it! Thanks again to Michael (follow him @mwVme) for the fun idea.


And, of course – this post was brought to you by Saison Dupont’s Cuvee Dry Hopping series. A unique yearly spin on the classic Belgian Farmhouse that allows the brewer to experiment with dry hopping. This year’s version used Brewer’s Gold hops and came out delicious – sweet and bitter with that famous Saison Dupont brettanomyces tang.

vRealize Automation 7 Management Pack for vRealize Operations

If you’re an SDDC administrator, you probably already know about the power and operational visibility that vRealize Operations brings to your environment. With the newly-released vRealize Automation 7 Management Pack for vRealize Operations, that operational visibility can be extended to be tenant-aware and help monitor your vRA environment in a whole new way.

This new Management Pack gives you comprehensive visibility into both performance and capacity metrics of a vRA tenant’s business groups and underlying cloud infrastructure. By combining these new metrics with the custom dashboarding capabilities of vRealize Operations, you gain an unprecedented level of flexibility and insight when monitoring these complex environments.

The purpose of this post is to walk you through the implementation of this new Management Pack – so, let’s get right to it.

You can download the Management Pack from the VMware Solution Exchange here.

Part 1: Enabling vROps as your Metrics Provider

First, let’s review what you’ll see before you integrate vRA and vROps. Looking at the details of any deployed item, you can see the highlighted white space – space that can definitely be put to more productive use.


Assuming you’re logged in as a vRA Tenant Administrator, click on the Administration tab, then the Reclamation button in the menu at the left. Select Metrics Provider and you’ll see the configuration panel for the vROps endpoint. Fill in the appropriate details for your vROps instance and click Test Connection. Once it succeeds, click Save.


You will probably be prompted to accept the SSL certificate offered up by your vROps instance. Click OK to accept the certificate, provided you trust it!


Now, if you click on the Tenant Machines option to the left, you’ll be presented with a list of all of your provisioned machines. You can see that now there’s a Health status badge for each machine. In my case, the Health is reporting an “Immediate” (orange) status for many of my virtual machines, due to very heavy utilization in my lab. You can also see the average CPU, Memory and Network consumption for each machine – data pulled directly from vROps. This consumption data can be used directly from within this view to initiate reclamation requests. For example, if a VM was identified here as idle, the VM owner could be notified and the resources recovered.


Click back to the Items tab and view the same object you looked at earlier. You will see that the white space now contains a vROps-driven Health badge, with information about any possible issues. When you’re ready, log out of your vRA instance.


Part 2: Configuring vRealize Automation

You’ll need to log in as the default administrator for this next step – administrator@vsphere.local


Click on the Administration tab, followed by the Tenants menu button at the left. Locate the Tenant that you plan to link vROps to and Edit it. In this example, I am modifying the vsphere.local Tenant.


Now, select the Local Users tab. Click the +New button to add a new user and fill in the requested details. In this case, my new username is “vropsmp” – and since we are creating this local user in the vsphere.local tenant, the full account is “vropsmp@vsphere.local”. Click OK and then Next.


This will place you on the Administrators tab. Using the Search boxes, find and add your new local account to both the Tenant Administrators and the IaaS Administrators role. Click Finish when you’re done, and then Log Out of vRA.


Now you’ll need to log back in as your normal vRA Tenant Administrator to finish the configuration.


Click the Infrastructure tab, then Endpoints from the menu on the left. Select Fabric Groups from the sub-menu and then click to edit your Fabric Group. In this example, the Fabric Group is named Dev Cluster.


Search for and add your new local user to the list of Fabric Administrators. Remember, in this example the user is named vropsmp@vsphere.local. Click OK to save the Fabric Group.


Now, click on the Administration tab, followed by Users & Groups from the menu on the left. Select the Directory Users and Groups sub-menu and search for your new local user. Click the user’s name to edit it.


In the list to the right titled “Add roles to this User“, scroll down until you find the Software Architect role. Select it and then click Finish to save the account.


Part 3: Configuring vRealize Operations

Once you’ve downloaded the new Management Pack (again, found here) you’ll need to import it into vROps and configure it to retrieve data from vRA.

Log in to vROps with an administrative user account.


Click on the Administration tab, and ensure Solutions is selected. Click the + symbol to import a new Solution.


Click the Browse button to select the downloaded Management Pack, then click Upload.

NOTE! If you already had the earlier vROps Management Pack for vRA installed, you may have to do a “force install” by selecting the first checkbox. This is because the version number scheme was changed, and vROps recognizes the NEW MP as being an OLDER version. This is normal, if a bit cumbersome.

Click Next when the upload is verified and you are ready to proceed.


Accept the EULA (after reading it carefully first, of course) and click Next again.


The installation will run for a while. When it shows “Completed”, click Finish.


Locate the new Management Pack in the list of Solutions and highlight it. Click the Configure icon (gears) to bring up the configuration dialog. Fill in a Display Name and Description as well as your vRA URL and the name of the Tenant you want to connect to. In this example, the Tenant is vsphere.local. Click the + sign to start setting up credentials next.


Fill in the credential details as shown – your SysAdmin should be the administrator@vsphere.local administrative account, and your SuperUser will be the local user you created at the beginning of these steps. In this example, that local user is vropsmp@vsphere.local. Click OK when you’re done.


Click the ‘Test Connection‘ button. You’ll be prompted with two SSL certificate dialogs – accept them both, if you trust the certificates. You see two because the Management Pack is communicating with both your core vRA appliance as well as your IaaS server(s).



If you’ve set everything up properly, you’ll see a message like this one. Click OK.


Click on Save Settings to save your adapter configuration. You’ll be prompted with a “Save Successful” dialog – click OK here as well – then click Close.


If everything’s gone according to plan, you should now see that your Management Pack is configured and receiving data from your vRA instance.


Part 4: Reviewing Dashboards

Now that all of the configuration is complete, you’re ready to start consuming the rich data exposed by your new integration. Click on the Home tab in vROps, followed by the drop-down arrow for the Dashboard List. Hover over the vRealize Automation sub-menu to see the 4 available default dashboards.


The vRealize Automation Overview dashboard shows information about the entire vRA instance – including component health and a whole host of metrics about each individual component of the instance. This is useful for troubleshooting and analyzing performance across your entire implementation of the vRA stack.


The vR Automation Tenant Overview dashboard provides exactly that – an overview of the various risk and health metrics pertinent to each configured vRA Tenant.


The vR Automation Cloud Infrastructure Monitoring dashboard allows you to see what impact infrastructure issues are having on tenant virtual machines, and what outstanding alerts may be present for those machines and infrastructure.


Finally, the vR Automation Top-N Dashboard highlights Top-N metrics, such as the most popular Blueprints, the most wasteful Tenants, the Business Group with the most alerts, and so on.


And, of course, all of the objects which are exposed by the Management Pack can be viewed in the vRealize Automation Environment view. These objects can all be referenced by Super Metrics, custom dashboards, or scheduled reports – but those are all beyond the scope of this guide.


That just about wraps it up – except, of course, for the most important part…

This post was brought to you by New Helvetia Brewing Company’s Mystery Airship 2.0 Imperial Chocolate Porter, brewed with Ginger Elizabeth’s Oaxacan Spicy Chocolate. This is quite possibly the single greatest beer I have ever tried – the darkness of the porter is supplemented by the brightness of the ginger and creamy feel of the chocolate. The flavors dance on your palate and then vanish in a fog of lingering, dark spice. I honestly think I found my desert island beer!


Happy Automating!

vIDM Attribute Mapping in vRA 7

It seems like the more time I spend with the new VMware Identity Manager (vIDM) in vRealize Automation 7, the more great new capabilities I discover. Today’s post comes directly from a customer request, and discusses how to use vIDM Attribute Mapping in vRA 7.

Due to complexities in this customer’s Active Directory environment, they have the “email” attribute in their user accounts populated – but it does not contain the user’s actual email address. This means that vRA is unable to send them notifications, as it automatically inherits this field and uses the information therein.

Have no fear, vIDM is here.

Here you can see my account with the default configuration. My email address is set to jon@corp.local, but that just isn’t where I receive my email.


Looking at my account inside of Active Directory, we can see that this address is set in the ‘E-mail’ field, which maps to the ‘mail‘ attribute in LDAP.


But, if we look at the Attribute Editor, we can see that the LDAP ‘otherMailbox‘ attribute contains my preferred email address.


So, how can I change my vRA configuration to utilize that otherMailbox attribute instead? It’s very easy. Start by clicking on the Administration tab in vRA. Then select the Directories button on the left hand side and edit your Active Directory shown to the right.


Next, you’ll be presented with the Active Directory settings page. Click on the Sync Settings button.


Here you’ll see a whole host of advanced synchronization options that you can change. Click on the Mapped Attributes button at the top, then select the dropdown next to email. Select Enter Custom Input… from the menu.


Now, enter the new Active Directory attribute name that you want to retrieve the email address from. In this example, the new attribute is named otherMailbox. Click Save & Sync to save your settings and update the user accounts.


You’ll now be given the opportunity to review the proposed changes before the sync is executed. You can see in this example, there are 4 AD accounts that will have their attribute mappings updated. Click Sync Directory.


Once the sync is completed (this may take some time, depending on how many objects were being updated and the size of your AD, etc) go back to the Administration > Directory Users and Groups view and find your user account again. You’ll notice that the email address has now been updated to reflect the contents of the preferred attribute.


Pretty cool, huh?

This post was brought to you by Terrapin Beer Co’s Poivre Potion, a very unique dry-hopped pink peppercorn Saison. I love the way the spicy and sweet notes of the peppercorns play off the bitterness of the hops and the farmhouse funk of the Saison yeast strains. Delicious and easy to drink.


Happy automating!

Configuring vRA 7 for 2 Factor Authentication


One of the most exciting new features in vRealize Automation 7 is the addition of the VMware Identity Manager (or vIDM) to act as the identity provider. This brings a whole host of new capabilities, but key among them is the addition of simple, flexible multi-factor authentication. This guide will walk you through the process of configuring vRA 7 for 2-factor authentication, using Google Authenticator as our example token.

In this example scenario, a vRA 7 environment is already set up and fully functional using traditional username and password authentication. The guide also assumes you have a basic CentOS server set up and available for configuration. Both of these steps are outside the scope of the instructions below.

Part 1: Configuring the Linux Host

First, ensure that you have proper DNS resolution set up for the CentOS host. This host will act as your authentication intermediary, processing both the Active Directory username and passwords as well as the Google Authenticator token. Since AD is involved, DNS and time configuration will be critical.


Next, SSH to your Linux host. You’ll need to be a privileged user (i.e. root) for these operations, so that’s what I’ve logged in as here. Your configuration may require you to log in as a standard user and then su to root, or use sudo.


Next, you must edit the SELinux configuration.

vi /etc/selinux/config


Ensure that the SELinux policy is set to either permissive or disabled as shown below. While it is definitely possible (and probably advisable) to keep SELinux enabled for this configuration, the additional steps to do so are outside the scope of this guide.
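Note that editing /etc/selinux/config only takes effect at the next boot. If you want the running system to switch immediately as well, you can flip the live policy to permissive – a quick sketch, to be run as root on an SELinux-enabled host:

```shell
# Put the running kernel into permissive mode right away
# (the /etc/selinux/config edit only applies at the next boot)
setenforce 0
# Confirm the current mode
getenforce
```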


Now, remember what I said about DNS and time being critical when integrating with Active Directory? You’ll need to set up NTP services on your host to ensure that there’s no time drift for authentication to work properly.

yum install ntp -y
ntpdate <your_ntp_server_here>

This will install the ntp package on your host and perform a one-time sync against your time source. Ensure that you set this NTP server to the same one that provides time to your Domain Controllers – this will prevent time-related authentication failures.
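Keep in mind that ntpdate only sets the clock once. To keep the host continuously synchronized, you would typically also start the NTP daemon and enable it at boot – a sketch for CentOS 6-style SysV init; adjust for your distribution:

```shell
# Start the NTP daemon now and have it come up at every boot (CentOS 6 / SysV init)
service ntpd start
chkconfig ntpd on
# Verify the daemon is tracking a source (the * marks the selected peer)
ntpq -p
```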


Now would also be a great time to confirm your DNS configuration is correct. Check the /etc/hosts file and ensure that your hostname is mapped to the correct IP address. Also check your /etc/resolv.conf file to ensure that your host is pointing at a DNS server which can properly resolve your Active Directory. In the example shown, the DNS server is set directly to our Domain Controller.


Next, ensure that the wget package is installed. Your host may already have this installed.

yum install wget -y


Great. The basic system is now set up and ready to start loading the software that will do the heavy lifting.

First up will be the PowerBroker Identity Suite, or PBIS. These utilities will enable the simple addition of AD authentication to your Linux host. The commands below will add the PBIS RPM repository so that yum can download and install the packages.

rpm --import <PBIS-repo-GPG-key-URL>
wget -O /etc/yum.repos.d/pbiso.repo <PBIS-repo-file-URL>
sed -i "s/mirrorlist=https/mirrorlist=http/" /etc/yum.repos.d/epel.repo
yum clean all
yum install pbis-open -y


Now that PBIS is installed, you can join your Linux host to your AD domain.

domainjoin-cli join corp.local <your_domain_username>

This command will actually join the host to the domain, creating a computer object and all the required Linux PAM configuration. Ensure you use a username with the rights to add machines to the domain – in the example here, we used the default Administrator account.

/opt/pbis/bin/config AssumeDefaultDomain true

Next, this command will ensure that the domain you joined is always assumed to be the default. This saves you entering DOMAIN\username notation for everything you do.
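At this point it’s worth sanity-checking the join before moving on. These commands are a sketch – the path assumes a default PBIS Open install, and Administrator stands in for any domain account:

```shell
# Confirm the host believes it is joined, and to which domain
/opt/pbis/bin/domainjoin-cli query
# Confirm that AD accounts resolve through PBIS (any domain user will do)
id Administrator
```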


Now open up your Active Directory Users and Computers snap-in. By browsing to the default Computers container, you can validate that your Linux host is now added to the Active Directory. In this example, it is named util-01a and is listed as a CentOS 6.3 host running PBIS Open 8.3.


And while you’re in the ADUC view, create a domain group called RADIUS_Logon_Disabled in the Users container. You don’t need to add any users to it now – this will be used only if you want to deny any users the ability to authenticate against RADIUS without completely disabling their account. We’ll come back to this group later.


Now, reboot your Linux host. This ensures that all of the PBIS configuration is in full effect.

Once the host is fully restarted, log in as the same privileged account you were using before. We’re not done yet!

rpm -Uvh <repoforge-mirror-URL>/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm

This will enable another external repository, so that you can obtain the QR code generator that will be used with Google Authenticator…


yum install qrencode qrencode-devel git pam-devel gcc -y

…and this will grab and install all the packages and dependencies needed to build the Google Authenticator components.


Since the Google Authenticator utilities aren’t delivered as an RPM package, they’ll need to be built from source. To do that, you’ll download the source files from a Git repository and compile them directly on the Linux host. Don’t worry, this sounds a lot harder than it is.

cd /root
git clone <google-authenticator-source-repo-URL>
cd /root/google-authenticator/libpam
make && make install

This checks out the latest version of the Google Authenticator code, downloads it to your local system, compiles it and installs it. Easy, right?
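Before moving on, it’s worth confirming that make install actually placed the PAM module somewhere PAM can load it – the exact directory varies by distribution:

```shell
# Locate the freshly built module; on CentOS 6 it typically lands
# under /lib64/security or /usr/lib/security
find / -name pam_google_authenticator.so 2>/dev/null
```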


This is the last package installation, honest. Download and install the FreeRADIUS server, using:

yum install freeradius -y


Now that all the packages are installed, there’s some configuration to be done. This part requires some config file editing, so be sure you’ve got your editor of choice handy and read the steps carefully – small mistakes can have big impact here!

First, the user which FreeRADIUS runs as must be changed. By default, the server executes as the radiusd user, but because we will need to read Google Authenticator tokens from every user’s home directory, it is far easier to run the service as root instead. There are of course other ways to make this possible without running as root, but they are outside the scope of this guide. In a production environment, you should definitely explore doing so.

vi /etc/raddb/radiusd.conf

Change:

user = radiusd
group = radiusd

to:

user = root
group = root


Next, edit the users file to deny access to members of the AD group you created earlier. This is accomplished by adding the text shown here, in the “Deny access for a group of users” section of the file.

vi /etc/raddb/users

DEFAULT Group == "RADIUS_Logon_Disabled", Auth-Type := Reject
        Reply-Message = "Your account has been disabled."

DEFAULT Auth-Type := PAM


Now, FreeRADIUS must be configured to accept PAM-based authentication. PAM is the Linux Pluggable Authentication Module framework, and is what makes all of this fancy authentication possible.

vi /etc/raddb/sites-enabled/default

Uncomment the line shown so it just reads "pam"


Once the FreeRADIUS server is configured to accept PAM authentication, PAM itself must be configured to use the correct mechanisms – in this case, combining Active Directory authentication with Google Authenticator tokens. To do this, edit the /etc/pam.d/radiusd file, comment out all of the existing lines, and then add the configuration below:

vi /etc/pam.d/radiusd

Comment all existing lines by prefixing with #

auth requisite pam_google_authenticator.so forward_pass
auth required pam_lsass.so use_first_pass
account required pam_lsass.so


Finally, the RADIUS server must be configured to authenticate the vRealize Automation server itself. This is done by pairing a shared secret with the hostname of the system. Edit the /etc/raddb/clients.conf file and add the text specified in the section shown. Be sure not to add this new client definition inside the default definition. In the example shown, the client is vra-01a.corp.local, the secret is VMware1!, and the shortname is vra-01a. Fill in your specific details instead.

vi /etc/raddb/clients.conf

client <your-full-vRA-VA-hostname> {
 secret = <your-shared-secret>
 shortname = <your-vRA-VA-friendly-name>
}

Now, start the FreeRADIUS server.

service radiusd restart
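Once you have created a user token (Part 3 below), you can optionally verify the whole chain from the Linux host itself using FreeRADIUS’s bundled radtest utility. This sketch assumes the stock localhost client entry in clients.conf (shared secret testing123) and the example user mary, whose passcode is her AD password with the current 6-digit token appended:

```shell
# radtest <user> <passcode> <server> <nas-port> <shared-secret>
# Expect an Access-Accept in the reply if everything is wired up correctly
radtest mary 'VMware1!098765' localhost 0 testing123
```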


Part 2: Configuring vRealize Automation

Now that the Linux host is configured to process the authentication requests, you’ll need to configure vRealize Automation’s VMware Identity Manager instance to leverage it.

Log in to vRealize Automation as a Tenant Administrator.

Click on the Administration tab, then the Directories Management button on the left. Select the Connectors button and you’ll see the screen pictured. This is where you’ll configure the vIDM Directory connection. Click on first.connector as shown.


You’ll be presented with the screen below. Click on the Auth Adapters button. Notice that the RadiusAuthAdapter is Disabled. Let’s change that – click on RadiusAuthAdapter.


Here you’ll see the configuration for the vIDM RADIUS adapter. Fill in the fields as shown, substituting the correct RADIUS server hostname/address, Shared Secret (remember, you entered this in an earlier step – in this example it was VMware1!) and Realm prefix. The Realm prefix is your domain name, followed by a trailing backslash character. Also, note the Login page passphrase hint has been customized – this reminder will display on the login page to help guide users to enter the correct data.

Do not enable the Secondary Server at this time – leave all the rest of the fields as-is.

Click the Save button at the bottom of the screen, then switch back to the vRealize Automation tab in your browser.


Now, click on the Network Ranges button and select Add Network Range. This will allow you to specify groups of IP addresses which will use a particular authentication config. It’s a good idea to configure just one or two IPs for testing purposes initially, so that you don’t accidentally lock yourself out of the environment.


The Network Range configuration is pretty straightforward. Just enter a name, description and IP range. Make the starting and ending IP addresses the same to specify only a single host. In this example, we are limiting this range to the local desktop. Click Save.


Click on Policies to the left and then edit the default_access_policy_set. Remember that you can create multiple policies for multiple scenarios.


Click on the Green + sign to add a new policy rule.


Configure the policy rule as shown:

  • If a user’s network range is: <your new network range here>
  • And the user is trying to access content from: All device types
  • Then the user must authenticate using: Radius Only

Click Save.


Now, grab the icon highlighted in red with your mouse and drag the new rule to the top of the list. Click Save.


vRealize Automation is now configured to use RADIUS authentication, combining both Active Directory credentials with a Google Authenticator token.

Part 3: Enabling Users for Google Authenticator

Now that the Linux host has been built and configured and vRA has been set up to take advantage of it, you need to create tokens for your users.

Re-connect to your Linux host from Part 1 using SSH, or if you still have an active session simply switch back to it. You should be authenticated as root at this point, as you will be assuming the identity of your AD users to create their tokens.

In the pictured example, we are becoming a user named mary. Mary is an AD user who has never before logged in to this Linux host – yet we were able to assume her identity by authenticating against Active Directory. Pretty cool! You can also check that you are indeed logged in as Mary by running whoami.

su mary


Now, you’ll create the Google Authenticator token. This can be done by running the following command:

google-authenticator -tdf -r 3 -R 30 -w 17 -Q UTF8

Notice that the command creates a huge QR code, a Secret Key, and 5 emergency scratch codes. These codes can be used in the event that you don’t have your smart device handy, but each can only be used once. Keep those in a safe place. The QR code is a graphical representation of the alphanumeric secret key printed directly beneath it.
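For reference, here is what each of those flags requests – worth double-checking against google-authenticator --help on your build, since the options have shifted slightly between versions:

```shell
# Annotated breakdown of the command above (comments only):
#   -t          time-based (TOTP) codes rather than counter-based (HOTP)
#   -d          disallow reuse of a previously accepted code
#   -f          write ~/.google_authenticator without prompting
#   -r 3 -R 30  rate-limit logins to at most 3 attempts per 30 seconds
#   -w 17       accept a window of adjacent codes to tolerate clock skew
#   -Q UTF8     render the QR code using UTF-8 block characters
```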


Here’s the fun part. Pick up your nearest handy smart device. It could be a smartphone, a tablet, etc. I use an iPhone, so the following images were captured there.

Search for the Google Authenticator app in the App Store, Google Play, etc. Download it – it’s free. Open the app.


Tap the “Scan Barcode” option and grant the application access to your camera. You can also select Manual Entry and type in the alphanumeric secret key – but where’s the fun in that?


Point your phone at the QR code generated on the screen and the app will do the rest!


You can see that the Authenticator app has automatically generated a token for Mary, showing her name and the server which she’s authorized for. The little Pac-Man thing to the right is a timer – these tokens are only good for a single use, and only valid for 30 seconds.


Part 4: Testing Your Work

To recap, you’ve just:

  • Built a Linux host to handle the task of authenticating against Active Directory and Google Authenticator
  • Configured vRealize Automation to leverage that host as an authentication source
  • Created a time-based token for one of your Active Directory users

Now it’s time to put it all together and test the configuration.

Open a new browser window. I find that an ‘Incognito’ or ‘Private Browsing’ window works best, since you probably have another window logged in as your Tenant Administrator already. Notice that you are now prompted for the username and AD Password + Google Auth Code – that was the free text you entered a few steps back to help guide your users.


Log in using the new token on your smartphone. Assuming the following parameters:

  • Username = Mary
  • Password (AD) = VMware1!
  • Google Authenticator Code (from smart device) = 098765

You would log in with a username of “mary” and a passcode of “VMware1!098765”.
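The concatenation is literal – there is no separator of any kind between the password and the token. As a trivial sketch, using the hypothetical values from the example above:

```shell
ad_password='VMware1!'   # the user's Active Directory password
auth_code='098765'       # the current code from the Authenticator app
# The RADIUS passcode is simply password followed by token;
# forward_pass on the server side splits it back apart
passcode="${ad_password}${auth_code}"
echo "$passcode"
```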



This post was brought to you by Breakside Brewery’s Salted Caramel Stout, which was genuinely instrumental in getting me through developing this configuration. Notes of chocolate play on the nose while sea salt and fresh caramel round out the palate in one of the smoothest, most pleasant stouts I’ve ever tried.


Happy automating!

Special thanks to Ed Kaczmarek for contributing to this guide – follow him at @edkaczmarek!

An Elephant Named Multitenancy – Multitenancy in vRealize Automation

I had the opportunity recently to spend a few days in sunny Florida with a group of VMware’s Professional Services leaders. The week was spent discussing, demonstrating and teaching them all about the newly released vRealize Automation 7. We focused on how this release could deliver on the promise of truly flexible, extensible automation and enable our customers’ journey to the cloud. But across many of my sessions and discussions, it became obvious that there was a looming question – an elephant in the room.

That elephant’s name? Multitenancy.

I wanted to take a little while and outline what Multitenancy really means, from the formal definition to the way that VMware implements the concept in vRealize Automation, and what that implementation really means for you, my readers, co-workers, customers and friends.

Let’s start with the formal definition, for which I will reach out to Gartner. I chose Gartner as my source for this definition because I believe that this is where many people first saw the term become popular.

According to Gartner:

“Multitenancy is a reference to the mode of operation of software where multiple independent instances of one or multiple applications operate in a shared environment. The instances (tenants) are logically isolated, but physically integrated. The degree of logical isolation must be complete, but the degree of physical integration will vary. The more physical integration, the harder it is to preserve the logical isolation. The tenants (application instances) can be representations of organizations that obtained access to the multitenant application (this is the scenario of an ISV offering services of an application to multiple customer organizations). The tenants may also be multiple applications competing for shared underlying resources (this is the scenario of a private or public cloud where multiple applications are offered in a common cloud environment).”

Whew, that’s a mouthful. Let’s break it down a little bit as it specifically pertains to vRealize Automation (vRA), starting with an example.

Frank works for the finance department of a large company. He is responsible for deploying a new instance of a SQL-based financial application. The database for this application contains very sensitive company data that must be protected from unauthorized disclosure – both within the company and to external parties.

Andy, on the other hand, is an intern with the application development department of the same company. He needs a new external, forward facing web server to be provisioned. This server will be available to everyone in the world – both external and internal users.

And finally, Isaac is a system administrator with the IT department at our fictional company. He is tasked with configuring and maintaining the vSphere environment, storage systems and vRA instance (busy guy!).

Now, as the management plane for a hybrid cloud solution, vRA positions itself as a manager of managers. It inherently has visibility into all of the resources that it is responsible for managing, allocating and providing access to. These resources, as defined above, are shared – that is the very nature of cloud computing. Things like compute power in the form of vSphere Clusters or vCloud Air blocks, abstracted storage capacity in the form of vSphere Datastores, network access, etc. One of vRA’s many responsibilities is to act as an authentication and authorization solution – ensuring that while Frank and Andy can both log in to the same portal, they can only see the resources that they have been granted access to.

That means that Isaac must guarantee that Andy cannot access that sensitive financial data that Frank is working with. And while Frank may be a skilled DBA and analyst, the last time he wrote any code was back in the Cobol days – what business does he have deploying web services? None, that’s what!

This is a classic multitenancy example in a cloud-oriented world. Apply the same principles to your own organization – you should be able to see the parallels immediately. Replace Finance and Development with Test and Production, or with your Palo Alto and Washington, DC branch offices.

Here’s where the good news starts. vRealize Automation has been designed to account for exactly these principles. Inside the platform, VMware has provided quite a few layers of access control. Let’s explore some of them in greater detail.

  • Tenants – These are logical divisions that set boundaries for all of the stored objects and policies inside of vRA, with the exception of the endpoints and collected fabric groups. Tenants provide the opportunity to apply unique branding to the vRA Portal and login splash screen, have dedicated authentication providers and other Tenant-dedicated services (e.g. a unique vRO instance). A Tenant Administrator can delegate the management of objects and resources within their own tenant to other users inside their organization. While handy for logical separation, vRA’s Tenant construct is not frequently used in production deployments, as the following constructs usually provide more than adequate segregation without increasing administrative overhead. VMware’s best practice is to leverage a single Tenant and multiple Business Groups.
  • Fabric Groups – A Fabric Group is a policy that defines the relationship between heterogeneous compute resources and the authorized administrators who can slice them up into virtual datacenters (VDCs) for consumption, also known as Reservations. This means that an administrator like Isaac can select certain “chunks” of available infrastructure and assign it for use by specific sets of consumers. Examples of an “infrastructure chunk” can include a vSphere Cluster, a vSphere Datastore, an AWS instance or a vCloud Air ovDC. Once these “chunks” are designated as part of a Fabric Group, the responsible Infrastructure Administrator can determine how they are allocated among consumers across Tenants.
  • Reservations – Reservations are a way for the Infrastructure Administrators to determine how much of an “infrastructure chunk” a Business Group can consume. For example, you might have a vSphere Cluster for your Production environment which contains 512GB of pRAM and 5 1TB Datastores. Once you have created a Production Fabric Group, a Reservation can be used to determine that the Finance Business Group gets 64GB of pRAM and 2 of those Datastores, while the Application Development Business Group gets 448GB of pRAM and access to the remaining Datastores. By doing this, Isaac the Infrastructure Administrator has ensured that the shared cloud infrastructure has been logically segregated, and that Andy can’t over-consume the shared resources and put Frank’s production application at risk.
  • Business Groups – A Business Group is a logical grouping of users, which can contain both standard users as well as Managers. A standard user is able to log in to the vRA portal and select items from the catalog – subject to their Entitlement, which will be covered in a moment. Managers are responsible for configuring the governance of deployed workloads as well as the membership of the Business Group. Managers can also request and manage items on behalf of the users who they manage.
  • Entitlements – Finally, the Entitlement is a way to refine exactly what a user of the platform can see and do in the catalog and with their deployed items. The actions granted in an Entitlement can be tied in at any point to vRA’s multi-level Approval engine to add management oversight and governance if desired. So, for example, a Java developer could be permitted to deploy only Linux servers with an Eclipse IDE pre-installed, while an MSSQL DBA could deploy only Windows servers with SQL Server installed, pending his manager’s approval. Both could be permitted to power cycle their machines, while only the Java developer might have access to destroy his system. These are just examples – an Entitlement can be configured with virtually limitless combinations of actions, abilities and approvals.


So, in our scenario, Isaac would use the Default vRA Tenant for his organization – since it’s all the same company, no special branding is needed. He would then create a Fabric Group that encompasses the appropriate computing and storage capacity for his departments – whether public cloud based like vCloud Air or private cloud, like an on-premise vSphere environment. Then, he would make two Business Groups for the Finance and Development departments – each with a Reservation specifying how much of the fabric capacity the department could consume. Finally, he would configure Entitlements that determine which users/Business Groups can deploy and manage which types of Blueprints. The image above illustrates that while in some cases Blueprints may be shared, the ability to manage workloads provisioned from those shared Blueprints has been limited by the Business Group.

For example, in the scenario pictured, Developers can provision Blueprints 1, 2, and 3 – and Finance users can provision Blueprints 3 and 4. But each resulting object belongs solely to the Business Group that created it – the shared nature of the original Blueprint has no impact on the provisioned server.


The segregation outlined in this scenario is more than sufficient to meet the multitenancy requirements of virtually every customer. And when you add the incredible power of NSX Micro-Segmentation to your environment, the levels of isolation you can achieve between deployed machines, network segments, organizational units, etc. is simply unparalleled.

Now, one potential concern that I have heard customers raise is that an Infrastructure Administrator can “see” the available resources attached to every endpoint. Well, yes and no! In our scenario above, Isaac definitely knows that compute clusters and Datastores dedicated to both Development and Finance exist – that’s his job, after all! But vRA does not enable him to browse those datastores, or manipulate the provisioned servers owned by those Business Group members. Since vRA is positioned to be that manager of managers, owned and maintained by the cloud (or IT) team of an organization, does the simple awareness of the existence of these resources really present any risk?

My argument is that, outside of the very rare case of a formal carrier-grade service provider, it doesn’t.

I hope this post has helped to clear up some of the confusion around Enterprise Multitenancy in vRealize Automation and helps put some of the FUD around this concept to rest.

Please feel free to leave any feedback you might have in the comments by clicking “Leave a Comment” at the top of the article. I’m very interested in hearing what others think about this topic.

Brewmaster Jack Garden of Grass

And of course… This post was brought to you by Brewmaster Jack’s Garden of Grass American IPA. This fantastic fresh hop beer sports a rare and “experimental” hop varietal known as HBC 452, which imparts a great and juicy watermelon flavor that mixes with the distinct piney-ness of Simcoe hops.

Happy Automating!

vRA 7 – Editing Machine Blueprint Settings

So with the recent release of vRealize Automation 7, I have been showing it off at every chance I get. Whether it’s to customers or to our own internal employees – the response has been overwhelmingly positive!

One thing I have had more than one person ask me about, though, is whether or not you can edit the settings for a Machine Blueprint after it’s been deployed.

The answer is yes, but I understand why some folks may overlook where you can do this.

First, enter the Design tab and edit the Blueprint you wish to modify by clicking it.

Then, just look for the Gear icon next to the Blueprint name and click it:


This will bring you to the Blueprint Settings dialog, where you can modify the name, description, NSX settings, lease times, etc.
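For those who prefer automation over clicking through the dialog, the same information can in principle be read via the vRA 7 REST API: a bearer token comes from the identity service, and composite Blueprints live under the composition service. The sketch below only builds the requests – the hostname, credentials, and tenant are placeholders, and you should verify the exact endpoint paths against the API reference for your version:

```python
# Hedged sketch: builds vRA 7 REST requests without sending them.
# Hostname, credentials, and endpoint paths are placeholders -- verify
# against your version's API reference before relying on them.

def token_request(host, username, password, tenant="vsphere.local"):
    """Build the request that exchanges credentials for a bearer token
    (identity service)."""
    return {
        "method": "POST",
        "url": f"https://{host}/identity/api/tokens",
        "headers": {"Content-Type": "application/json"},
        "json": {"username": username, "password": password, "tenant": tenant},
    }


def list_blueprints_request(host, token):
    """Build the request that lists composite Blueprints
    (composition service)."""
    return {
        "method": "GET",
        "url": f"https://{host}/composition-service/api/blueprints",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
    }
```

In a lab, each dict could be dispatched with something like `requests.request(req["method"], req["url"], headers=req["headers"], json=req.get("json"))`; the token itself is returned in the `id` field of the identity service’s response.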


This post was brought to you by the St. Supery 2009 Dollarhide Petit Verdot. Dark, almost inky black and full of vanilla oak and bold tannins, its multi-layered complexity goes great with a Converged Blueprint.


Happy automating!

Reflecting on Hands-On Labs at VMworld 2015

Now that I’ve had a day or two to decompress after another action-packed VMworld, I thought it would be appropriate to just post a few thoughts about the experience.

I became involved with the Hands-On Labs shortly before VMworld 2014, making this my second cycle with the program. At the time, I had no idea how difficult or how rewarding the experience would be. As it turns out, participating in the Labs has been one of the single most personally and professionally satisfying undertakings of my life.

The development cycle began back in February of 2015, when a few of my fellow captains and I began developing what would be known as the “SDDC Base Pod” – a fully integrated single-site environment based on vSphere 6.0. This pod would contain all of the necessary components to showcase VMware’s Software-Defined Datacenter. Once extensive performance and integration testing had been completed, the pod was saved and made available to the rest of the individual lab development teams. This happened around May – and is when we really began creating our lab-specific content. All in all, each of us has contributed 500+ hours to the development, testing and delivery of this lab.

Working with Kim (@KCDAutomate), Shawn (@ShawnMKelly) and Grant (@GrantOrchard) with Burke (@TechnicalValues) as our leader, we laid down the additional software components, configuration, development and documentation to create the 8 amazing modules which comprised our 2015 lab. I’m pleased to be able to reveal the details of the lab now that VMworld has concluded:

HOL-SDC-1632 – vRealize Automation Advanced: Integration and Extensibility

A list of the modules is as follows:

  • Module 1 – You Need More Integration
  • Module 2 – An Introduction to Extensibility
  • Module 3 – Integrating vRealize Automation with the VMware Cloud Management Platform
  • Module 4 – Integrating vRealize Automation with Infoblox IPAM
  • Module 5 – Integrating vRealize Automation with Puppet Enterprise
  • Module 6 – Integrating vRealize Automation with NSX
  • Module 7 – XaaS Services with Advanced Service Designer and vRealize Orchestrator
  • Module 8 – Working with the vRealize Automation API

Each of the above was lovingly handcrafted by our team to show off not only the power and flexibility of the vRealize Automation engine, but also the amazing ways it can be integrated with the other components of the VMware Cloud Management Platform, as well as with third-party solutions that might already exist in your infrastructure.

But creating the labs is only the start. Delivering the content at both VMworld events and supporting it throughout the year is when the real work begins. The amazing Hands-On Lab staff works tirelessly to make sure that every attendee and lab user has a seamless, enlightening, engaging and enthralling experience. There are core staff, support staff, principals, captains, proctors and administrators. All of them play a role in making the premier hands-on learning event in the industry a reality, and they all deserve huge thanks.

According to the surveys we received, our lab was a resounding success – as were the Expert-Led Workshops we hosted to teach our customers all about extensibility.

But, of course, events like this can’t be all work. We have plenty of fun too – and I’m very pleased to be able to call so many of these rockstars my friends. I want to thank a few of them in particular:

  • Jad, Chris and Tina for wrangling all the staff and handling the administrative work that’s so important with a team this size
  • Kim, Grant, Shawn and Burke for being the most amazing team I can think of. We’ve helped each other learn and grow so much in such a short time, and it’s been incredible
  • Doug, Bill and Dave for supporting us as we built, tested, reimagined, rebuilt, re-tested and rebuilt the environments time and time again
  • The rest of the principals, captains and proctors who helped create all the other content and made the lab room the bustling hive of expert conversation it was

That’s all for now – we’ll see some of you in a few weeks at VMworld in Barcelona – and keep an eye on the Hands-On Labs portal for this year’s content to be available to you at home!

Now if you’ll excuse me, my grill is hot and these rib-eyes are calling my name. Paired with a 2011 Miner Oakville Cabernet, I don’t think I can wait much longer.