Enterprise Mobility and Security – The easy way as Excalibur rises from the ashes.

Microsoft Enterprise Mobility and Security – The Easy Way

On Friday (4/11/16) I ceased to be a full-time employee of Microsoft UK Ltd. In most circumstances I would have found this a rather disconcerting, even sad, day. But NO!

One thing never changes at Microsoft, and that is the speed of change! Rapid hardly describes it. The department I worked for changed its focus in July, and I had been looking around for my next role, either in or out of Microsoft.

The ideal opportunity arose: to deliver IT Camps and other training and event services to Microsoft as a vendor, and to carry on the Microsoft training and consultancy business I had to stop when I joined full time. Training is my real passion.

Having made the decision, resigned and come up with a mutually convenient date for this to happen, I had to restart my old business, Excalibur Services (UK) Ltd.

Excalibur Services Deploying Microsoft Enterprise Mobility

Excalibur Logo

The obvious path for me to go in the productivity and security world was Office 365 and the Microsoft Enterprise Mobility and Security portfolio.

I have been using, training and evangelising all of the products for a long, long time. I have been telling people just how easy it is to setup, deploy and use.

But the startling truth hit me this week when I was faced with buying it all and starting from scratch.

I do not have a big company (YET! A growth mindset is key in all business), so I bought 5 seats of Office 365 E3 and 5 seats of the Enterprise Mobility and Security E3 suite as well.

I have also converted my training and testing lab into a small business data centre but that is for a different post.

Having handed back my excellent Microsoft Lumia 950 phone, I also bought a much smaller iPhone SE and a nice large 4G data plan. The next step was to set up the accounts and the policies to protect my mobile devices.

I allocated a day to do all this.

I hope my estimation skills are rather better honed when estimating work for clients. It took me less than 15 minutes to sign up for the Office 365 subscription and add the Enterprise Mobility and Security licences. In that same time I also set up the compliance and conditional access rules, joined my Windows 10 devices to Azure AD and added my iOS devices to the party through the Intune Company Portal.

The last job was to ensure that my conditional access and Exchange ActiveSync rules were working.

Enterprise Mobility Deployed

The Company Portal Showing devices deployed

15 minutes start to finish (obtaining the Apple certificate to allow iOS devices in took the longest).

A short exercise deserves a short post! My shortest ever, even.

The next post will show the steps I took and help you to join the @MSIntune party.

Sign up here.

Azure AD administration reaches the new Portal

What better birthday present could I have from Microsoft than to read Alex Simons' blog post: Azure AD administration is in preview in the new portal. Hooray!

To explain why this is so important and such a big thing, I need to outline a little Microsoft Azure history.

Azure V1.0

First there was Windows Azure (let’s call it V1)

This was based on Azure Service Management (ASM) and used the Azure portal shown below (now known as the Classic or Management Portal). The URL points to the legacy nature of this portal: http://manage.windowsazure.com

azure ad administration

The services available in this portal grew over time from a few to lots! The portal was designed to be used and accessed by subscription administrators and co-owners: everyone who had access to it had full admin rights over all the objects, which in most day-to-day usage is a severe disadvantage. At a lower level of management it meant that bulk actions became tricky, and starting and stopping multiple virtual machines (VMs) was also a problem.

Azure V2.0

This led to the introduction of a brave new world of Microsoft Azure (or V2), based around Azure Resource Management (ARM). This is represented to the world through the new portal (which has had names such as the Preview Portal and the Ibiza Portal, and is now just the Azure Portal). The URL is now http://portal.azure.com, as shown below.

azure ad administration

This new portal opened the possibility of full Role Based Access Control (RBAC) and was aimed at a wider set of users with various levels of rights and permissions. It also introduced resource groups, which are buckets that can contain multiple objects of multiple classes: VMs, networks, etc. This allows administrators to create a resource group and provide a whole environment for a number of users or developers to make use of. All the resources in that group can be started, stopped and deleted as one.

azure ad administration



Each view of the portal is limited to one Azure Active Directory, but users can switch between each Azure AD to which they have access. The new ARM model also allows objects to be created using templates; in this instance Azure uses JSON (JavaScript Object Notation) templates. This allows for scripting and automating just about everything that Azure can do.
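For the curious, an ARM template is just a JSON document with a fixed skeleton. A minimal (empty) template looks something like this; the section names come from the deployment template schema, and anything you deploy goes in the resources array:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
```

Templates like this can then be deployed repeatedly with PowerShell or the portal, which is what makes the automation story so strong.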


Due to the nature of the beast there is currently a need to use both portals to have complete control of your Azure subscription, and one of the most important features missing from the ARM portal was Azure Active Directory administration. As of 12/9/2016 this has been put right: the preview is live.



Azure AD Team news

The blog post from the Azure AD team does an excellent job of outlining the features, so it's not worth repeating here.

azure ad administration

The interface is logical and works well for me; it's easy to see many more things at a glance than you could in the classic portal.

Here you can see the steps to configure resetting passwords.

azure ad administration

To me this is the coming of age of the new portal, and very soon I will have no need at all (other than for top-level subscription functions) to visit the http://manage.windowsazure.com site.

There is a quick video tutorial here

Your Azure AD Administration

My top tip is to add the Azure Active Directory administration tab to your permanent menu of items.

To do that, from the main portal dashboard click More Services and type Azure in the search box at the top. To the right of Azure Active Directory PREVIEW is a star; click the star.

azure ad administration

The Azure Active Directory tab will then be pinned to the dashboard services list.

Over the next few months this will become the home of Azure AD administration, an excellent start with some seriously good features. I will be investigating further and reporting back!

Bare metal Nano

Last time I went through the quick and easy way of creating and connecting to a Nano server VM using a VHD.

This is great for all your virtual needs, but with the ability of a Nano Server deployment image to act as a Hyper-V host, a failover cluster node and a Scale-Out File Server using Storage Spaces, it makes sense to deploy Nano to a bare metal device. This leaves far more of the device's resources for services, rather than for the infrastructure that enables those services.

Deploying Nano on a bare metal device is a very different ball game; this post is designed to show the process from concept to reality.

Deploying Nano to bare metal

There are three stages

1. Build a Nano Server image

2. Build a WinPE boot device

3. Deploy the Image to the Bare metal machine


Build an image

The first stage, as before, is to create a Nano image file. In this case, since we are deploying an image and not booting from a VHD, we need to create a .WIM (Windows Imaging Format) file; the process is identical to that in the previous post. Below is a PowerShell command line that will create a .WIM file and add to the image the ability to act as a Hyper-V host and a failover cluster node, plus the storage features.

New-NanoServerImage -Edition Datacenter -DeploymentType Host -MediaPath e:\ -BasePath c:\nanoserver\base -TargetPath C:\NanoServer\target\bootnano.wim -ComputerName baremetal -Compute -Storage -Defender -Clustering -OEMDrivers

The detailed breakdown is this

-Edition Datacenter  (Options are Standard or Datacenter)

-DeploymentType Host (Options are Host or Guest)

-OEMDrivers (loads bare metal drivers rather than guest VM drivers)


Build a WinPE bootable USB device

Having obtained your .WIM file, it is time to build a WinPE boot device (USB). To do that, download the Windows Assessment and Deployment Kit (ADK) and then run the Deployment and Imaging Tools Environment. This is essentially a command line environment where the Windows Preinstallation Environment tools run, enabling the installation of WinPE to other devices.

Note: This must be run as an administrator (indicated in the window title bar)


Two commands are required. First, copy the WinPE environment to a staging folder on your hard drive:

copype amd64 c:\WinPE_amd64

Then, to install that on a USB device:

MakeWinPEMedia /UFD c:\WinPE_amd64 E:

(where E: is the drive letter of the USB device and this device is then reformatted and made bootable)

The next step is to copy the newly created .WIM file to the root of this new USB device so it is available for deploying to the hard disk of your new bare metal device.


Deploy to Bare metal


The final stage is to deploy this image to the hard disk on the bare metal device. I chose to do this on a Gigabyte BRIX i7 device. This makes screenshots quite difficult, but I did video the process, which will be posted here when it has been tidied up.

The steps to do this are also listed below.

I plugged the USB device in and booted the BRIX from USB (on my device this involves pressing the DEL key until the BIOS screen loads, then setting the boot device to USB).

When the system boots into WinPE from the USB device, the WPEInit command runs and sets the system up for action (this can take a couple of minutes). Once the command prompt appears, there are a series of commands you need to run.

The utility which carries out most of the work is Diskpart.exe (the command line version of the Disk Management program in Windows)

The short version is

1 clean off the disk (wipe it and remove all partitions)

2 make it a GPT rather than MBR disk

3 create three partitions

EFI (S: FAT32 System volume 100MB)

MSR (Microsoft Reserved partition, 128MB)

Primary (N: NTFS Windows volume)

The next stage is to run DISM (Deployment Image Servicing and Management), which allows all sorts of magic with online and offline images (WIM files). Here the DISM command simply applies the image to the primary partition.

The final stage is to update the Boot database to point to our image and reboot using WPEUtil into a brand spanking new Nano Server.

The long version, with all the commands, is shown below.



Diskpart

Select disk 0

Clean

Convert GPT

Create partition efi size=100

Format quick FS=FAT32 label="System"

Assign letter="s"

Create partition msr size=128

Create partition primary

Format quick FS=NTFS label="NanoServer"

Assign letter="n"

List volume

Exit



Dism.exe /apply-image /imagefile:.\<yourimagefilename>.wim /index:1 /applydir:n:\ 


Bcdboot.exe n:\Windows /s s:


Wpeutil.exe reboot

This is obviously a manual process which could be automated using WDS and either a VHD or a WIM.

But for a one-off setup it takes around 5 minutes. Not bad for a system capable of Hyper-V, clustering and storage services.

How to deploy Nano server.


So, if you are an IT Pro you will have heard that Microsoft is releasing a new server operating system in the autumn of 2016, cunningly branded as Microsoft Windows Server 2016. No waste in marketing creative brand design $'s there then, which is a very good thing. Why?

Well, it leaves the whole budget for creating what is, in my opinion, the biggest thing to hit the server operating system world in 20 years. To see what is new in Windows Server 2016, check out this link on TechNet, and if you have more time available there is a series of really good Microsoft Virtual Academy resources here.

Currently the Technical Preview of the new Server OS is at TP release 5. This post will concentrate exclusively on this release.

Over the last few years the feedback about Operating systems in general and server operating systems specifically has been very direct and vocal.

Why do we need to install everything if I only want to run a file server?

Why do I have to reboot so often?

Why is the image so big that it becomes hard to store, move about and install?

The first attempt to resolve these issues was Server Core, released as an installation option in Windows Server 2008: a command line only version of the server OS that can be managed remotely and, to a limited degree, from a direct console. Server Core did away with a lot of extraneous 'stuff' and meant fewer updates, smaller images and smaller, quicker installations.

But this was not enough, so the server product team at Microsoft went back to the drawing board and produced a deployment option now known as Nano Server. This cannot be installed from the DVD or ISO; it has to be installed using PowerShell, with each individual image built up to contain only the roles and services required for that particular server.

If you fancy trying it now and don't want to step through it in this post with me, then head off to http://aka.ms/nanoserver, the landing page for the Getting Started with Nano Server walkthroughs. You could also do worse than to head over to Channel 9 here for the Nano Server channel, or here for the Windows Server channel.

If, however, you would like to step through how to deploy this innovative new server that can sit in as little as 150MB of RAM and on a VHD as small as 450MB, then read on.

Deploying a Nano Server

There are several ways to deploy a Nano Server: a bare metal bootable image, a boot-to-VHD physical host, or a VM image. All three require different tasks and commands. This post will concentrate on the VM method, as it is the easiest for a new user to get up to speed with. Future posts will cover the other scenarios.

The first step is to download the Windows Server 2016 TP5 ISO. If you have an MSDN subscription you know where to get it from, if not you can sign up to evaluate the Preview here. Just sign in with a Microsoft Account and download the ISO.


You can also see that a pre-created Nano VHD has been uploaded for you. I would recommend downloading both. This post will not use that VHD, but will show you the steps to go through to create your own; it is, however, useful to have one sitting there ready to use.

The final way to evaluate a Nano server is by deploying one to Microsoft Azure, the Microsoft Public cloud infrastructure. You can sign up for a free trial here. But it is a much better idea to take advantage of the new IT Pro cloud essentials offer here, which gives a longer trial with more money to spend.

So to be able to follow these steps, the minimum you require is.

  1. ISO for Windows Server 2016 TP5
  2. A  PC or server with an operating system and a hypervisor.
    1. Windows 8, 8.1 or 10 with Hyper-V installed.
    2. Windows Server 2008, 2012 or 2016 TP5 with the Hyper-V role installed.

The instructions I will use will show screenshots from Hyper-V running on Windows 10. Hyper-V does not come installed by default, so you may need to follow these instructions to enable it on Windows 10.

The first step is to mount the TP5 ISO. To do this, copy the ISO to a folder, right click the file, then click Mount (alternatively, double clicking the file mounts and opens the ISO; in my case on drive letter R:).

You will see a folder named NanoServer


Double click that folder to see the contents


The next step is to copy the NanoServerImageGenerator folder to your hard disk.

The result is shown below


Now for the good stuff. We need to run PowerShell to create a VHD containing a Nano Server deployment image. Make sure you run PowerShell as an administrator (right click the icon and choose Run as administrator).


I would always run the PowerShell ISE (Integrated Scripting Environment) as it provides a better environment to learn and understand the magic that is PowerShell.

If you do not use PowerShell often, you may well have to set the system to allow you to run local scripts that are not signed.

Do that with this command.

Type Set-ExecutionPolicy RemoteSigned in the white script area (if you only see blue, click the View menu and make sure Show Script Pane has a tick by it).


Then click the green play symbol or press F5 to run the script pane (to run a single line, select that line and click the Run Selection icon, one to the right, or press F8), and accept any warnings or offers to save.

You will not get a result, just a new prompt line (PowerShell does not give any feedback unless it is asked to, or unless there are errors or warnings).

It is good practice to type all your commands in the white script pane, each on a separate line, and run them line by line. This means you can save the commands as a script when you are done, saving you retyping next time and even helping you learn.

Now change directory (folder) to where you placed the ImageGenerator folder; in my case that is the root of the L:\ drive. Then type:

Import-Module .\NanoServerImageGenerator -Verbose

I have added the Verbose switch so that you can see the commands (or cmdlets) that are imported.


You are now able to use the CmdLets above to create your Nano Server VHD. There is an awful lot of work that goes into creating this VHD, but you can do it with one very simple command.

Type (all on one line)

New-NanoServerImage -Edition Standard -DeploymentType Guest -MediaPath R:\ -BasePath .\Base -TargetPath .\NanoServerVM\NanoServerVM.vhd -ComputerName Nano1 -Compute -Storage -Clustering

Breaking this command down will help, I am sure:

New-NanoServerImage (This calls the CmdLet you imported)

-Edition Standard (This switch sets either standard or datacenter edition (new in TP5))

-DeploymentType Guest (This switch defines whether the VHD is for a physical host or a guest VM)

-MediaPath R:\ (This is where you mounted your Server TP5 ISO file)

-BasePath .\Base (This is where you are going to copy the installation files and packages, .\ signifies the current folder)

-TargetPath .\NanoServerVM\NanoServerVM.vhd (This is the full path including filename to your output VHD or VHDX)

-ComputerName Nano1 (This is the internal computer name)

There are many, many more switches; these allow you to install roles and features into your image. This can be done at creation time, or by using PowerShell or DISM after the image has been built.
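As a sketch of the after-the-fact route: the TP5 NanoServerImageGenerator module includes an Edit-NanoServerImage CmdLet that can add packages to an image you have already built. The paths below reuse my earlier conventions, and the package name is an assumption based on how role packages are named on the TP5 media, so check yours before running it.

```powershell
# Sketch: add the DNS role package to an existing Nano VHD after creation.
# Assumes the NanoServerImageGenerator module is already imported and that
# the BasePath/TargetPath match the folders used when the VHD was built.
Edit-NanoServerImage -BasePath .\Base `
    -TargetPath .\NanoServerVM\NanoServerVM.vhd `
    -Packages 'Microsoft-NanoServer-DNS-Package'
```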

I want this VHD to be a file server, a failover cluster node and a Hyper-V host, which is why the command above includes the following switches at the end:

-Storage -Clustering -Compute

Then run that command by selecting it all and pressing F8. This will take some time, and longer the first time you run it, as the CmdLet copies the media files to your hard disk and creates a base Nano VHD as well. Not all of these tasks are required for future image creations.

The script will run and ask you to enter an administrator password. This will be the local administrator password, so choose one you will remember. You can join a Nano Server to an Active Directory domain (although Group Policy will not be applicable to Nano Servers).


When finished, you will have two new folders as shown


Base contains all the Nano Server software, a .wim file and the packages to install, and the NanoServerVM folder contains your new VHD.

This one is 674MB in size, not bad for a Hyper-V host, file server and failover cluster node.


Now that we have the VHD, we need to use Hyper-V (or PowerShell) to create a VM. I will use PowerShell.

I am going to assume that you already have a Virtual switch in your Hyper-V manager (if not use this link to do that now)

The PowerShell to create a VM for my circumstances is below, change it to suit your drive letters and paths.

New-VM -Name EdsNANO -SwitchName Internet -Path L:\NanoServerVM -VHDPath L:\NanoServerVM\NanoServerVM.vhd

Once this has completed with a result as shown


You can then head over to Hyper-V Manager, start the VM and connect to it.

The VM takes about 6 seconds to start and connect and will show this screen


Enter the Administrator username and password and you see the Nano Server Recovery console as below


The Nano Server is not designed to be administered locally, but uses any or all of the traditional server management tools (and can also be managed from Azure; more on that in a different post).

From this point I suggest you explore the local configuration possibilities.

To connect to the Nano Server remotely we need to get back to PowerShell; we can either use the new Windows 10 PowerShell Direct feature or connect to the IP address of the server.

So from PowerShell use this command

Enter-PSSession -VMName EdsNANO

resulting in this output


You can see from the revised command prompt, that we are now working directly in the EdsNANO VM.

Check this by typing a number of commands to see what you have



Get-Process

You can also look at the Hyper V manager in the Memory tab of the VM


a total of 220MB for a running server

And then inspect the virtual disk from the VM settings menu.


A full running server in a hard disk of 606MB

All pretty staggering.

Future posts will show what we can do with this great new technology, but it is only part of the full plan. Nano Server will NOT run all workloads, and anything that won't run on Nano will run on Server Core.

Happy practicing

PowerShell:- My top 10 CmdLets to build a travelling lab.

So it doesn't take much scanning of my blog and other ramblings to realise I am a great fan of PowerShell. The ever-demanding Harry (the TechNet UK editor) suggested that I ought to write about the top 10 PowerShell tools I use in creating my lab environments.


Now, the awesomely talented Andrew Fryer has already created a well-read and long-lasting Lab Ops series on his TechNet blog, so there is no need for me to repeat any of that.

I use a mixture of platforms for my labs and demos.

My on premises ‘datacenter’ consists of;

An HP Z600 workstation with twin 2.4GHz Xeon CPUs (E5620), each with four cores and hyper-threading enabled, 24GB of RAM, 1TB of OS disk and 2TB of data disk (with lots of iSCSI available from the various NAS devices around the place). This system runs Windows Server 2012 R2 with Hyper-V, in a workgroup.

Plus I have two HP Gen8 MicroServers, each with a twin-core, non-hyper-threaded Pentium G2020T at 2.5GHz, 16GB of RAM and 2 x 500GB disks. These systems are Windows Server 2016 TP4 member servers running Hyper-V, IIS, and File and Storage Services.

Cabling is Cat 5e; the switch is a web-managed Gigabit model (changed often, so the make is unimportant).

This allows me to run most scenarios to test and demonstrate Windows Server technologies.

I tend to manage the deployment of these physical hosts in a manual way to test the various setup options.

All of this is stored neatly in a secure underground bunker AKA – Ed’s garage – See pic



When I want to test WDS, MDT, the ADK etc., I use my Gigabyte BRIX systems. I have four of these, to test the various storage and clustering options to destruction! (See below.)


This is one machine short of ideal, as I have to use the main DC as a cluster node as well (currently Storage Spaces Direct requires 4 nodes).

My Cloud data centre is, of course Microsoft Azure, sign up for a free trial here

But this post is about preparing and deploying a repeatable lab setup on a single smallish portable workstation (Dell Precision M6700). It is not about automating the repeat (that will come later, in a separate post or possibly a number of them).

So, to the PowerShell. NOTE: this post will not give you a block of code you can use, but it will give you the ideas needed to start learning how to do this yourself.

The aim of this post is NOT to show a completely automated lab creation system but to detail the ten CmdLets I recommend you to use when building your own lab.

With the advent of Windows 10 (1511) and Server 2016 Technical Preview 4, Microsoft has introduced an additional virtual switch into the mix: the NAT (Network Address Translation) switch. This allows you to segment your VMs on any IP scheme you choose without interfering with your physical host, while still having external network / internet access.

1. New-VMSwitch is the CmdLet I use to create this new switch. There are many useful blog posts about how to set this up, this one is very easy to follow.

2. New-NetNat is the other CmdLet you need to finish the job, see the screenshot below


This pair of commands creates a new VMSwitch and a host based NetNat object to allow communication through. See the commands required above.

These are fairly innocuous commands, but they allow me to isolate my lab setup completely and place it on any IP scheme without worrying about external connectivity.
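For reference, the pair of commands looks something like the sketch below. The switch name, NAT object name and the 192.168.100.0/24 subnet are my own choices for illustration; substitute your own, and note this is the Windows 10 1511 / Server 2016 TP4 syntax described in the post linked above.

```powershell
# Sketch: create a NAT virtual switch, then the host NetNat object that
# translates the lab subnet out through the physical host.
New-VMSwitch -Name 'NATSwitch' -SwitchType NAT -NATSubnetAddress '192.168.100.0/24'
New-NetNat -Name 'LabNAT' -InternalIPInterfaceAddressPrefix '192.168.100.0/24'
```

VMs attached to NATSwitch then take static addresses (or DHCP from a lab server) in 192.168.100.0/24 and still reach the internet via the host.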

The Hyper-V Virtual Switch Manager GUI hasn't yet caught up with the game; as you can see, the switch is shown as an internal switch.


But if you head off to Network Connections you can see a new vEthernet switch has been added, and it has the correct IP settings.


2 down 8 to go.

Having set up the networking to allow my lab infrastructure to operate in isolation using Network Address Translation I now need to stick some Virtual Machines into the mix.

I need to keep my storage requirements quite low for a travelling lab. I also want to ensure I am using the most up-to-date versions of software: Windows Server 2016 TP4 (currently) for servers, and Windows 10 Insider builds for clients.

I have 32GB of RAM to play with, so I should be able to host a number of chunky VMs in that memory space using Dynamic Memory; but the first consideration is my virtual disks.

I am choosing to forego performance in the name of space. I have decided to use differencing disks for my servers, with all of the main server instances running the full GUI version of Windows Server 2016 TP4. (I know Jeffrey Snover would not be at all impressed, but when demonstrating it is often useful to show the native tools without needing to fire up a separate VM and the RSAT tools.)

So I need to use some PowerShell to get hold of a VHD image and then make some changes to set it up as a parent disk and create some differencing disks.

My lab will need the following servers

  • Domain Controller (with all the main plumbing roles: AD DS, AD CS, AD FS, DNS, DHCP)
  • File Server
  • Exchange Server
  • SharePoint Server
  • Web Application Proxy

This will then allow me to use this setup as a good ‘On-premises’ demonstration of integration with Azure Active Directory and Office 365 / Enterprise Mobility Suite.

So, to the VHD image I need. The best way to get one is to download the super-cool Convert-WindowsImage script from the TechNet Gallery. This allows you to create a fully sysprepped VHD or VHDX from either a WIM or an ISO.

The  Windows Server 2016 TP4 ISO can be obtained either from MSDN if you have a subscription, or from the evaluation centre here

3. .\Convert-WindowsImage.ps1 also has a UI you can call as shown below (but you cannot use all switches and parameters if you use the UI)


The full command you need to run is something like this

.\Convert-WindowsImage.ps1 -SourcePath $imagepath -VHDPath $vhd -VHDFormat VHDX -Edition ServerDataCenter -VHDPartitionStyle GPT -Verbose

There are many more options and uses. I am cheating here, as this is a script and not a single CmdLet, but I set the rules… so I suppose I can break them.

Having created the file and put it where you want it to be nice and safe from tampering, you need to make it read only.

4. Set-ItemProperty is a simple file-system CmdLet that can turn on the IsReadOnly attribute of your VHDX.

So a command such as

Set-ItemProperty -Path $vhd -Name IsReadOnly -Value $true would do the job.


Note the r in the attributes (Mode) column signifies read only.

We now have a base network and switch and a base VHDX to start our super portable lab.

So we need to create five differencing disks from the Parent disk we just made read only.

This is a simple job for the New-VHD CmdLet.

5. New-VHD has a whole load of parameters to assist you to define your disk.

Something like

New-VHD -Path $dcpath -ParentPath $vhd -Differencing

and you would need five of these, with the path variable changing for each VM.
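Rather than typing five near-identical commands, a loop does the job. This is a sketch under my own naming assumptions: the parent path and the lab folder layout are illustrative, and the five server names are just shorthand for the roles listed above.

```powershell
# Sketch: one differencing disk per lab server, all chained to the same
# read-only parent VHDX. Paths and server names are illustrative.
$vhd = 'L:\Base\Server2016TP4.vhdx'        # the read-only parent disk
$servers = 'DC1','FS1','EX1','SP1','WAP1'  # DC, file, Exchange, SharePoint, WAP
foreach ($name in $servers) {
    New-VHD -Path "L:\Lab\$name\$name.vhdx" -ParentPath $vhd -Differencing
}
```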

So we have a network and disks; all we need to do now is create the VMs.

This would be a simple job for the New-VM CmdLet.

6. New-VM the standard VM creation CmdLet

New-VM -Name $VMName -Generation 2 -VHDPath $dcpath -Path $dclocation -SwitchName $natswitchname -BootDevice VHD

And again we would need five of these, one for each VM.
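As with the disks, the five VMs can be created in one loop. Again a sketch under my own assumptions: the server names, lab folder layout and switch name ('NATSwitch') are my conventions, and the 2GB startup memory is just a reasonable choice for a Dynamic Memory lab.

```powershell
# Sketch: create one Generation 2 VM per lab server, each booting from
# the differencing disk created earlier and attached to the NAT switch.
$servers = 'DC1','FS1','EX1','SP1','WAP1'
foreach ($name in $servers) {
    New-VM -Name $name -Generation 2 -MemoryStartupBytes 2GB `
        -VHDPath "L:\Lab\$name\$name.vhdx" `
        -Path "L:\Lab\$name" `
        -SwitchName 'NATSwitch' -BootDevice VHD
}
```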

Having now built the vanilla machines, we need to add the roles and features to the servers and then install the software we want on them.

Now if you want to discover all sorts of ways to do this automagically then head on over to @deepfat’s LabOps series of blog posts.

I will just outline the necessary CmdLets to set up your Domain Controller with all the roles and features.

For this reason if you are not going to be automating all this in one script (as I am not) then my No 7 pick is  the CmdLet Install-WindowsFeature

7. Install-WindowsFeature allows you to install Windows roles and features, much like Server Manager does. The difference is that by default the CmdLet does not install any management tools, so you will need to add the -IncludeManagementTools parameter to your commands. The really cool thing about this CmdLet is that you can install the features either in a running VM or to an offline VHD.

The CmdLet requires administrative credentials.

To find out what is available you can run the Get-WindowsFeature CmdLet first and find out the available Roles and Features that you need.

I will be installing

            • Active Directory Domain Services
            • Active Directory Certificate Services
            • Active Directory Federation Services
            • Domain Naming Service
            • Dynamic Host Configuration Protocol

So the CmdLet will call a parameter containing the names of those features

$features = 'AD-Certificate', 'AD-Domain-Services', 'ADFS-Federation', 'DHCP', 'DNS'

Install-WindowsFeature -Name $features -Vhd $vhdpath -IncludeAllSubFeature -IncludeManagementTools

If you have a thin image and the features are not available within the image, you need to add a -Source parameter with a path to where the .WIM file has been mounted; this parameter is only used if the features cannot be found in the image itself.

With this achieved, you can safely start your DC VM either with Start-VM or through Hyper-V manager.

Once you have done that, you can use PowerShell Direct on your Windows 10 or Windows Server host machine to Enter-PSSession using the -VMName parameter. This is a new Windows 10 and Server 2016 PowerShell feature and allows connecting to VMs from the host without the need for a working network connection.


8. Enter-PSSession -VMName $VMName is my eighth choice, very useful at all times.


The arrow indicates I am in a session with my TP4HOST2 VM simply by connecting directly using the VMName.

In the final Stretch now – just two to go.

Simply installing the Windows features above only installs the binaries necessary to configure and manage the roles and features. It is still necessary to actually get those roles and features into a state where they provide a useful service to the network and users.

I only have 2 CmdLets left if I limit myself to 10, so I choose the AD DS CmdLet that installs a new AD DS forest. This will set up a domain able to imitate any real-world enterprise situation.

9. Install-ADDSForest

And here Windows Server allows you to build this command up with absolutely NO experience or knowledge of PowerShell.

Simply run the wizard to promote your newly installed server to a Domain Controller in a new Active Directory forest. At the very last screen, before you finish, the wizard gives you the opportunity to save the PowerShell that was generated to actually perform the promotion.

The screenshots below show you how cool this is.



The script above was generated on a Windows Server 2016 TP4 machine so the Domain and Forest Modes are described as Threshold. It can simply be added to a script to make your server a fully functional AD DS Domain Controller.
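For reference, the saved script looks broadly like this sketch (the domain name and paths are illustrative, not from my lab; on TP4 the domain and forest modes show as Threshold):

```powershell
# Sketch of the script the AD DS promotion wizard generates
Import-Module ADDSDeployment
Install-ADDSForest `
    -CreateDnsDelegation:$false `
    -DatabasePath 'C:\Windows\NTDS' `
    -DomainMode 'Threshold' `
    -DomainName 'corp.contoso.com' `
    -DomainNetbiosName 'CORP' `
    -ForestMode 'Threshold' `
    -InstallDns:$true `
    -LogPath 'C:\Windows\NTDS' `
    -NoRebootOnCompletion:$false `
    -SysvolPath 'C:\Windows\SYSVOL' `
    -Force:$true
```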

My final CmdLet could be any one of a number of choices, from configuring Certificate Services to DHCP to AD FS. But since all your networks will need all machines to have IP addresses, I will opt for the DHCP CmdLet to authorise the newly installed DHCP server in your newly created Active Directory. AD DS will not allow a DHCP server to be integrated and take advantage of all the great integration services unless it is authorised in the directory.

The CmdLet is

10. Add-DhcpServerInDC

There are many DHCP CmdLets in the module and we will cover more of those in a different post, as we still have to set up a scope, scope options, reservations and other cool stuff like filters and policies.
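A hedged example of the authorisation (the server name and IP address are hypothetical – use your own DC's values):

```powershell
# Authorise the new DHCP server in the directory
Add-DhcpServerInDC -DnsName 'dc1.corp.contoso.com' -IPAddress 192.168.0.10

# Confirm it is now listed as authorised
Get-DhcpServerInDC
```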


Indeed, as you can see below, there are 121 DHCP commands in the DhcpServer module, so the choice is very wide!
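You can check that count for yourself with Get-Command (the exact number can vary between Windows versions):

```powershell
# Count the CmdLets exported by the DhcpServer module
(Get-Command -Module DhcpServer).Count
```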


I am sure that any PowerShell guru reading this post will call it lightweight or half a job. If the aim was to provide a series of chunks of code to allow an automated, repeatable lab to be built by simply passing a few parameters to a function or two, then I would agree.

My aim is to help a newcomer to PowerShell understand that there is so much you can do, and for them to go out and experiment with these 10 CmdLets. There are many online examples of the turnkey solutions I mentioned, but the new user is unlikely to learn unless they at least try to build their own lab, piece by piece.

Enjoy trying. Below are a few resources to help you on your way.

Microsoft Virtual Academy – PowerShell Courses

Channel 9 – PowerShell Resources

Microsoft Learning – PowerShell Course

Microsoft Bashes Windows 10 at //Build

I never thought I would post something with that headline. Yesterday at Microsoft’s annual //Build conference for Developers in San Francisco we were treated to the usual high quality glitzy round of announcements and demonstrations.

From new Cortana power to some great innovative technology assisting people who cannot see to understand the environment around them. All this was very much focussed on developers and getting all manner of developers to write programmes (I still don’t like saying Apps) for the Windows ecosystem using the Universal Windows Platform. This enables us to use one application on ANY type of Windows 10 device, whether it be a Raspberry Pi or an 80” Surface Hub, or even the purely magical HoloLens (which started shipping yesterday).

Having listened carefully to industry insiders over the last few weeks, the rumour and speculation about what was going to be announced included all sorts of wild things like PowerShell on Linux.

One thing Terry Myerson did announce is that there will be an Anniversary update for Windows 10 this summer and it will be free for all users (all 270 Million of them so far).

This update will include a whole host of new, innovative and quite frankly excellent features, from more Cortana integration to biometric login to websites using Edge. Imagine that: NO MORE remembering all those Amazon, Netflix or other online site passwords and usernames. Simply swipe your finger or look at the screen and Windows Hello will log you in.

All of this is cool and exciting but the real biggie is that Microsoft did not announce PowerShell on Linux (yet, who knows if that is in the pipeline).

But they did announce Linux on Windows!

Well actually to be more accurate the Anniversary update will include a Linux subsystem that will allow Linux command line tools to run natively inside a Windows 10 PC. This sparked a number of Twitter spikes for and against.

I am confident that the new Microsoft led by Satya Nadella is about as open as it’s possible to be, both in what we do well and in what goes wrong (see Tay, our new AI bot for Twitter). But more importantly, open in the sense that if you have something blocking you from moving to the Windows ecosystem as a developer, then it seems we will move heaven and earth to help you.

The Bash shell will be fully functional on Windows 10, welcoming all those developers who know and love a command language that has been on the scene since 1989. For those interested, it stands for Bourne-Again SHell (Bourne was the shell it replaced).

This will allow many more developers and IT Pros to use commands they know from Linux tools directly on their Windows 10 PC. Read all about it here.

There is a 30 minute recording all about it on Channel 9 here and a great build interview with Channel 9 live here

Scott Hanselman also has a great blog post on the topic here

And if you want the Skinny direct from Ubuntu, Dustin Kirkland from Canonical writes about it here

For real geeks: this is based on Ubuntu 14.04, should be available to Insiders within a couple of weeks, and will then be upgraded to Ubuntu 16.04.

Now PowerShell fans may argue (and they do) that we don’t need Bash, we need a more developed PowerShell. I simply direct those people to the mission of Microsoft under Satya:

“Empower every person and every organization on the planet to achieve more”

IT Pros and developers absolutely know what they want, when they want it and how they want it implemented. If there is a blocker to doing business, and they can carry on using what they are currently using without detriment to themselves or their employer’s business, then that is exactly what they will do.

By integrating Bash into Windows, and working with Canonical (who distribute Ubuntu) to make it work, Microsoft have shown that whatever it takes to help people use the tools they want to deliver their hard work on the Windows platform will be considered, and often implemented.

Another example of this is the use of GitHub, the open source code repository, to distribute PowerShell module updates.


For the first time yesterday I installed an Azure Module (1.3.0) from GitHub directly. Even the source code is there.

Now that is open for you. So if you are a Linux guru and want to start using Bash on Windows, watch this space for the Anniversary update, coming your way soon.


If you aren’t an insider already – sign up here

For more of the announcements from Day 1 of //Build check here.

Follow the live stream of Build keynotes here

Server Management Tools in the cloud

There are a number of things about my job that keep me awake at night or wake me up early and none of them are bad. It’s all about the excitement of what is coming our way in the management of our servers.

One of those excitement points as I call them was October last year when Jeffrey Snover demonstrated the ability to manage the awesome Nano server technology (another excitement point) from a cloud based console. No date was given and no expectation of the true goodness that was about to be delivered.

Well, on February 9th the Server Management Tools (SMT) went into public preview as an Azure service.


This new service allows you to use the Microsoft Azure Portal to create multiple instances of Server Management Tools, one instance for each server you would like to manage.

You need an SMT gateway, which you can install on one of your Azure VMs or in your datacentre.

Once that is done you are free to use this portal to carry out a wide range of management functions.

The service really comes into its own when you have a number of hosts running the new headless Nano Server deployment. There is no way to manage these other than by remote PowerShell; you cannot use Remote Desktop to reach the console (there isn’t one). You can connect directly to make a couple of firewall changes and check out the networking, but that is all.

The following screenshots, taken from a Nano VM (which is also a container host), show the limitations of the direct management features.


Nano Server is featured widely on Channel 9 and Microsoft Virtual Academy – why not brush up on your Nano Server skills before reading on?

The process of setting up SMT is not difficult, but you do need an Azure subscription. To make it super-easy, here is a link to a 30-day free trial with enough credit to get you going (£125).

Rather than take you on a complete walkthrough, which is already available here, I have produced a @Ch9 video which lasts around 30 minutes and takes you through the whole process of setting up the VMs, the SMT service, the gateway and the management options.

You can find that here. This is the first entry in the new @TechNetUK How-to series on Channel 9, please feel free to send in requests for How-to guides to ukitpro@microsoft.com

Having deployed the full set of SMT tools to your Nano Server, it is then possible to run a remote PowerShell session directly on the server. In addition, the PowerShell console has the full command window experience and a script editor to boot.


And, there’s more

You can view, edit and add to the registry of the machine


You can also carry out all of the management functions below.


A whole array of powerful functions that are just not available to you on the Nano interface (as it is).

It is particularly useful to be able to join a domain, rename the machine and check on roles and services available or installed.

The one gotcha you will see in the @Ch9 video at 22 minutes 40 seconds is the failure to connect to the Nano server from SMT.

Regular PowerShell users will know instantly that this is because the machine you want to manage is neither in the same domain as the management server nor in its TrustedHosts list.

A simple one liner run in an elevated PowerShell session on the management server will fix this.



Set-Item WSMan:\localhost\Client\TrustedHosts -Value '*' -Force

Now, for a production system, please replace the * with the IP address of the server you want to manage. For a lab system, adding the * wildcard to open up all options is acceptable.
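For a production system the one-liner then becomes something like this (the IP address is a hypothetical example):

```powershell
# Trust only the specific server you want to manage
Set-Item WSMan:\localhost\Client\TrustedHosts -Value '192.168.0.50' -Force

# Verify the current TrustedHosts value
Get-Item WSMan:\localhost\Client\TrustedHosts
```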


Once done, this error will disappear.

The flexibility of this service to manage your cloud or on-premises servers is what I like most. I can now be anywhere and, as long as I have web access, I can use the Azure Portal to manage my entire Nano Server (and full Server 2016 TP4) infrastructure (albeit one server at a time).

I hope you enjoy the video and do keep checking this blog and the @Ch9 page for new content.

Just how mobile is the Windows 10 experience?

Windows 10 – a mobile experience?

I have been cooped up at home for the last couple of weeks, unable to drive or travel very far after shoulder surgery. This lack of mobility got me thinking about my normal working life, just how mobile I am, and how Windows 10 lives up to one of its main aims: to make the Windows experience more mobile.

I should add that even though I will be talking about Windows 10 on a phone in this post, I am not using the word ‘mobile’ to mean smartphone. It is the mobility of the experience that is key here, not working whilst moving; there are key differences.


The question I will try to answer is:

Is there anything I can do at the office on a Windows 10 desktop device that I cannot do with Windows 10 because of my location or the device on which I am trying to work?

I deliberately exclude the availability of the internet from this question. Internet access is assumed.

So I thought the best way for me to answer this question is by looking at all the activities I engage in, at work, as an IT Pro Technical Evangelist.

So let’s start with where I work and what I use to do that work.

The Office (Work Premises)


I have added the parentheses as I am a home worker and I use my own machine at home, as you will see later on.

So Microsoft have provided me with a device to use. I currently have, as my main domain-joined device, a Surface Pro 3 (i7, 8GB RAM, 256GB SSD). Yes, I count myself very lucky as it is a super piece of kit. I do covet a Surface Pro 4 or Surface Book, and maybe my nice business manager Phil will be kind to me later this year. If not, I will get myself a Surface Pro 4 keyboard as they seem to make the whole experience so much better. Still, I digress.

When I arrive at work in Building 2, floor 2, section H (the hot desk section indicated by the discreet red arrow above), I plug this machine into one of the lovely Dell monitors (U2515) that the aforementioned Phil bought for us, which allows me some real space to work with.

From here I am able to print through the location-aware printing we use (controlled with smartcard access at the printer), and I can access all the internal file shares, which seem to be disappearing in favour of cloud storage such as OneDrive and OneDrive for Business.

I have wired and Wi-Fi access to the corporate network and all resources which I am authenticated and authorised to access.

I have access to Cortana and all of my store apps which I am signed into with my Microsoft Account for personal apps and the Business Store Apps when I add an organisational account to the store.

So – that is my aim: the same level of connectivity, functionality and productivity that I have from that desk.

The Home Office


At home, due to age and infirmity, I use an electrically adjustable standing desk and I have a Dell 1650 workstation (i7, 16GB RAM, 1TB SSD, plus other local storage from various NAS devices and local disks) with a four-monitor setup. This may sound like overkill, but when I bought it and set it up I was a full-time trainer and a lot of my work was done from home with online remote training. For this I needed to make sure that I had enough screen real estate and grunt to manage many VMs and applications. Suffice it to say I do not need all this now.

I have an Enterprise version of Windows 10 running the February 1511 update. I sign in with my Microsoft account xxxxx@hotmail.com and I have used Work Access to enrol my PC into device management. Our IT department (MSIT – this is worth looking at, we have a huge network) use Microsoft Intune (part of the Enterprise Mobility Suite) for this.

This is a prerequisite to be allowed onto the MSIT Virtual Private Network (VPN). Since my machine is mine and is not domain-joined, that is the only way I can access all the resources and applications I want in the style I want. Having done this, I also use my Microsoft employee badge, which is a smart card, as a source for Multi-Factor Authentication (MFA). It gets better: each time I connect to the VPN, I now receive an MFA request on my Lumia 950 smartphone, which is either a phone call or an app authentication request.


The reason for this is that Microsoft have made a decision to ensure all external access to the corporate network relies on phone based MFA.

If I want access to certain internal websites or resources without connecting to the VPN, I can use my smart card or a fairly new feature, a virtual smart card. The Trusted Platform Module on my PC stores a certificate which is enabled by my physical smart card. I still have to enter a complex PIN, but I don’t have to plug in my card every time.

So I can still print to my work printers as well as my home printers. I have full access to all the work resources and applications I have when in the office. It is my machine, but I have policies enforced on it from Microsoft Intune, like a requirement for a work PIN to be created (part of the Microsoft Passport system). The Intune admin could also prevent me from copying any work-related data into applications that are not controlled – a feature called conditional access.

I ought to mention that the Microsoft Intune and the Enterprise Mobility Suite are additional options that have additional licensing costs.

If I don’t want to spend any extra money I still have the option to use a VPN without the additional device management provided by Intune.

I could use my Surface Pro 3 at home, as I have a dock and can use two monitors with that as well. Finally, I could use Windows To Go. If you are an enterprise user with a copy of Windows 10 Enterprise then you can use a wizard to create a USB-stick-based version to carry around with you. This can then be plugged into any device that will run Windows 10 and can boot from USB. The device owner is reassured that you cannot mess around with their data or hardware, and you are safe in the knowledge that this BitLocker-protected device is safe and secure, and that you can remain fully functional on the road, at home or anywhere else there is a device waiting for you.

At Customer Sites or events

Now I can dream that I will get to work in such idyllic locations, but it really doesn’t matter where I am or what I want to do. Armed with the Surface Pro 3 or the Windows To Go stick, I am pretty much bulletproof as far as devices go.

If, however, I am caught short and only have my phone with me, I have scored there as well. I have a Microsoft Lumia 950 as my issued work smartphone. You might have gathered by now that I am a bit of a gadget geek. This device is pure magic for me. I was lucky enough to be given a prototype and a trial version, so I have used this for the best part of a year now.

As a phone it does what it should; as a smartphone it equally holds up to most of the competition. It is fast, the camera is excellent, and it runs Windows 10, so all my Universal Windows Platform (UWP) apps work seamlessly as well. It has more RAM and a faster processor (with six cores) than any PC I had up until at least 2008 or so.

The big news now, though, is that this phone will turn itself into a PC. Just add a Bluetooth keyboard and mouse and a large screen with Miracast, or add the small dock and you can use a wired keyboard and mouse and up to two additional screens.

I now use this as my presentation device at almost all events and meetings. It is also under Intune management and has access to the VPN, indeed it has app activated VPN as well so any application that requires VPN access automatically connects.

This phenomenon is known as Continuum for phone and is pretty much game-changing.

The only downside is that until developers have written more UWP apps, the number of apps that will run in the large screen is small. I am glad to say that all my productivity apps as well as Microsoft Edge do run in a full screen.

So now, from playing films (sorry, movies) and videos whilst away from home to presenting awesome slides and demonstrations, it can do it all. All the other brands are now playing catch-up with turning the phone into a single device to do everything on!

While travelling

I added this as the only really mobile part, as in moving while using technology. I often work when I am travelling on a train or plane. All the above applies, although I don’t often use the phone for anything other than sharing my 4G EE connection with my Surface Pro 3 to get some reasonable connectivity.


I appreciate that I haven’t covered many of the Windows ecosystem devices in this post, simply because I don’t use them day to day.

I haven’t seen a HoloLens up close. I don’t have access to a Surface Hub at home or in my working day very often. I do have an Xbox One but rarely use it for work, other than watching great Channel 9 and Microsoft Virtual Academy content. Indeed, there is a load of very good content on the Enterprise Mobility Suite – why not take a look at some lab walkthrough videos I recorded showcasing its abilities?


I have all I need to work anywhere on any device (even non-Windows devices, using Office 365 and the Enterprise Mobility Suite).

Windows 10 fits the bill for everything I need to do.

Now to find that nice man Phil and get hold of the Surface Book I truly deserve.

The Life of an IT Pro: Past, present and future


In my day to day role I am constantly reminded of the fast pace of our new mobile-first, cloud-first world. You only have to look at the Microsoft Azure cloud services or the Office 365 products and services to see just how rapidly things change.

This constant refreshing of our IT Pro world started me thinking about the old times, the old products and what life was like way back when. I also started to get a little uneasy about what is going to happen in the future to the traditional IT Pro.

So why don’t I take you on a journey down memory lane, through to the modern day and what might be around the corner for us all in the future.


My first IT Pro role was back in 1994 when I was serving as a Police Officer in West Midlands Police. There were the usual political wrangles of procuring kit, software and resources to provide a new network, isolated from the main corporate network on which we would carry out a project to provide Crime Mapping, Human Resources planning etc.

My role was that of server hugger, sorry, Server Administrator. We faced several big issues with hardware and software but eventually went with the following:

A Pentium processor, not much RAM and a tiny hard disk running Microsoft Windows NT 3.1, with five client PCs running Windows for Workgroups 3.11 over a thinnet coax network.

The majority of you reading this will not have seen ANY of these things let alone used them. The interesting thing is that despite the tiny RAM and tiny hard disk it was plenty for what we needed at that time.

The operating system came on 3.5” HD floppy disks – or, as one young delegate at a recent event said to me, “ah yes, isn’t that the save icon?!”


To install the OS you had to hand feed the machine a huge number of these floppy disks in order, and if one was wrong, corrupted or an error occurred you had to start again.

Eventually we had the system up and running and we managed to use the MapInfo product (now part of Pitney Bowes) and its Map Basic development language to produce a very interesting crime mapping product, the first of its kind in the UK and the forerunner to a lot of today’s more intelligent policing strategies based on data analysis.

I need to add here that I had nothing to do with the development other than some rather neat Excel Macros to feed in the data. That was all down to a PC colleague John Kelly and the boss Supt Gil Oxley. (Gil went on to run IT in West Midlands Police and to form his own company providing HR resourcing software for rostering shift patterns. I don’t know what John is up to now).

What problems arose then that we still see today?



– The term DevOps didn’t exist, but the common-sense processes between Developers and Operations staff did. But how often were they used? Well, the team was three strong, Devs and Ops worked together and there was only one application, so we didn’t have much of a problem. There was no cloud, no real internet and many fewer threats too. The one challenge I faced most often was nosey police officers taking off one of the Ethernet coax terminators and bringing the whole network down.



– There was no TCP/IP stack for Windows, so it ran on NetBEUI, and because that didn’t follow the OSI 7-layer model it wasn’t allowed to run on our corporate network, so we had to build our own. It was slow and clunky – see above for people taking it down at will.



– Installations took forever. CD-ROMs had just come in, but our software was distributed on floppy disks. Hard disks were slow and backup took forever.

Remote Access / Enterprise Mobility


– This was only possible by modem, which made it very, very slow, and it was only available for administrators on the server; there were no Terminal Services or RDS back then. There was no concept of mobile computing, especially not in a Police environment.



– These were released as Service Packs once in a blue moon and were essentially a whole load more floppy disks. If there were bugs, and you had an account and a support contract, you had access to download hotfixes by modem from the FTP site (slowly).



– This was one of the hardest challenges, and to some degree it is still the same today. The difference here is that we were training from scratch with Windows. Almost no Police systems ran on GUI operating systems. The average user was initially actively against the extra work needed to use the systems and didn’t like having to use the HR system for rostering either. The majority did not have PCs at home and almost none of them had any form of mobile communications, although some had new-fangled message pagers.


Even though I am a Technical Evangelist and I don’t have a regular IT Pro day job, I do help out family members and friends with their business IT. Since this is current and real, I will obfuscate identifiable parts of the present section.

The scenario is a small business with around 20 users. The users and management are not hugely IT friendly or literate (by today’s standards), but they all have smartphones and laptops / PCs at home.

The network in use has Cat 6 cabling and Gigabit managed switches for each segment. There are three servers: one running Windows Server 2012 R2 Essentials, one the old Windows Server 2008 Small Business Server, and the other a Server 2008 R2 RDS Session Host for remote desktop access. The new Essentials server uses Azure Backup and is integrated into the Office 365 Business Premium subscription. The network uses IPv4 and IPv6, and the phone system has just been replaced with a fully Ethernet VoIP system. The card payment system has also been integrated and no longer uses the telephone network, but is connected through the cable internet system running at 100Mbps.

There are 15 Windows PCs, 3 different WiFi networks and all users have full remote access both to the RDS Server and through Essentials to their desktop PC should they need it.

There is LOTS of RAM, large fast processors and masses of RAID 0 and RAID 5 hard disk arrays.

There is also an LTO-4 tape drive for local backup, which is removed from site daily.

The administration of this system is all completed out of work hours and mostly remotely.



– All applications in this network are commercial, either off-the-shelf or LOB applications, and the providers require some form of remote access to perform updates. The OS is Windows and is managed by online updates. There is no concept of DevOps in this business as there are no in-house or remote developers.



– Fast, Gigabit Ethernet fully-switched network with a small business router connecting to the outside world through cable internet and VPN.



– Installations are often carried out through remote access, take a few minutes, and the bits are always downloaded through a seriously fast internet connection. Media (if used) is either USB or DVD, and backups are rapid whether to tape or Azure.

Remote Access


– The business cannot survive in its current model without remote access. Several people work from home, the bookkeeper is often remote only. All users regularly connect to access data which has to be stored on site for regulatory reasons. Office 365 is the email solution.



– These are automated, online and fast. No longer managed by WSUS, and clients will soon be using the Windows 10 Current Branch model for updates. Servers are updated by Windows Update on a regular basis. No more FTP or disk updates. Third-party suppliers often connect to update their applications directly.



– From users resenting PCs, they now demand faster, better PCs and applications. Downtime is not accepted, and the users have a hundred ways to communicate the smallest upset in service availability. All users connect to email remotely, either by PC or smartphone. OneDrive keeps them connected to work documents, as does Office 365.


This is the bit I get to fantasise about: how easy being an admin will be in the near future.

Obviously hardware will never be the limiting factor, and applications are continuously integrated and deployed so they are always up-to-date and functioning.

The local datacentre is running on Windows Server 2016 with Azure stack deployed. Developers are using a mixture of Visual Studio with Team Services to manage deployment.

So what will I do as an IT Pro 10 or 15 years from now?



– This is the area I expect to change most in business IT systems in the future. I would imagine all IT Pros will need to understand the mind and processes of a Developer, and ensure that software-defined infrastructure, networking and storage are all configured to provide what they need in Dev, Test and Production at all times.



– Obviously all networking will be at 50GbE from server to client, and faster for the core services. The big change for the IT Pro (other than the cable chap and the rack installation guy) will be that everything is software-defined. There will be one console to ensure that the whole stack, from storage through networking to compute, is scalable, flexible and self-service within set parameters.



– Windows Server 2016 advances mean that all storage is directly attached and running in Storage Spaces Direct, with failover clusters hosting the necessary Nano-based virtual machines. All high-value data is stored on shielded VMs protected by Active Directory-based attestation, backed by HSM and TPM.

Remote Access


– Enterprise Mobility is the new normal. All users have access to thousands of SaaS apps with SSO provided by Azure AD integration. Microsoft Intune and Azure AD Join enable cradle-to-grave management of Windows and other devices.



– These will be delivered in secure packages, through online updates, perhaps staged in cached areas of your own network and not just for Windows but for all technology that requires updates to software and firmware.



– If the More Personal Computing vision of Satya Nadella has been successful, the methods of interaction with your PC will not stop at Holograms. The guys and girls at MS Research will be seeking out new ways of interacting with and controlling our technology.

You know what, that sounds a lot like NOW – just faster, and everyone will have adopted it!

In all seriousness, the IT Pro needs to evolve with each generation of Developer, User and Operating System, to ensure that his or her skills are kept up to date and relevant.

Don’t get left behind – make sure you know what’s coming by checking out the Microsoft Virtual Academy and Channel 9.

After all, we live in a world where PowerShell now works with Linux and network speeds to end users have moved from 5Mbps to 10Gbps.

In 1994, when I built my first network, neither PowerShell nor Linux existed (Ed: Linux did, but Linux 1.0 had not actually been released at the time of the network install).

So what will happen in the future? One thing is for sure: it isn’t going to be the fastest or the fittest that survive. Rather, it will be those most willing to adapt to change.

If you are an IT Pro in today’s world just ask yourself – am I skilled and qualified enough to move into the future? Why not start by coming along to one of our #InnovateIT events and see what’s changing in Enterprise Mobility, Cloud Infrastructure and Datacentre Infrastructure, not forgetting of course the Operations Management Suite. You can register for an upcoming session near you here.

#InnovateIT- UK IT Camps, new content, new events

The @TechNetUK #InnovateIT events are changing – Find out how and why.


One of the most noticeable differences between the ‘Old World’ and the new mobile-first, cloud-first world of Microsoft is that the content rarely stays the same for more than a few months.

This rapidly changing theme has an impact everywhere: the products, the training courses, exam content and now our very own @TechNetUK #InnovateIT IT Camps.

For the last six months we have been running a series of events with hands-on labs and audience participation covering What’s new in Windows 10 Enterprise, What’s new in Windows Server 2016 and What’s new in Cloud Infrastructure based on Azure Resource Manager templates.

Well, out with the old and in with the new: we have three new areas of content, with right-up-to-date material and labs to use. Please note you still need to bring your own device to use the labs.

This device only needs internet access and a browser. A keyboard is highly recommended, as the iPad/tablet-without-keyboard experience is hard to use.

A screen size in excess of 8” is also recommended, especially if your eyes are as old and tired as mine.

The new areas we are covering between March and July are:


Empowering IT to protect and enable productivity

This one-day event covers the following topics:

1. Identity and Access Management

2. Increase productivity and reduce helpdesk costs with self-service and SSO experiences

3. Manage and control access to corporate resources

4. Mobile Device and Application Management

5. Information Protection


Increasing efficiency and scalability with Windows Server 2016 and Microsoft Azure

This one-day event covers the following topics:

1. Datacenter Infrastructure

2. Azure Infrastructure


This one-day event covers the following topics:

1. Introducing the Microsoft Azure Stack

2. Visibility, Insights and Security Analytics with the Operations Management Suite

3. Cloud-Enabled Protection with the Operations Management Suite

4. Hybrid Automation with the Operations Management Suite

The day takes the form of a mixture of lab exercises, demonstrations, presentations and useful group chats about areas of technology that will be key to most, if not all, businesses over the next few months and years.

This obviously keeps us in the @TechNetUK office on our toes, technically speaking, which is good. It also means we can come around the country and pass on the latest and greatest advances in the Microsoft world to you all.

Register HERE to attend; the site is being updated to reflect these changes over the next few days, so remember to come back if they are not yet available.

Come along, meet your friendly neighbourhood Technical Evangelists and Microsoft MVPs, and get learning the future of IT.

You can also keep up with the topics at the Channel 9 portal and the Microsoft Virtual Academy.

Finally, you can also practice the skills we demonstrate in the #InnovateIT events by using the TechNet Virtual Labs.