Learning to code: Week 4 – ASP.NET MVC view model basics

This week, I made limited progress learning to code.  I was out of town on a work project Sunday – Wednesday, and since I got home, I’ve been slammed with catch-up tasks.  However, while in the air and in airports, I was able to read a few good resources on MVC view models.  I wanted to link to them here for reference.
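
From what I’ve read so far, the core idea is pretty simple: a view model is a plain class shaped for one specific view, rather than for the database, so the view only gets the data it actually needs.  Here is a minimal sketch of what I mean (the class and property names are made-up examples, not from our app):

using System;
using System.Collections.Generic;

// Hypothetical view model for a package list page.
// It carries only what that one view needs, not full database entities.
public class PackageListViewModel
{
    public string SearchTerm { get; set; }
    public int TotalPackageCount { get; set; }
    public List<PackageSummary> Packages { get; set; }
}

public class PackageSummary
{
    public int PackageId { get; set; }
    public string Description { get; set; }
    public DateTime ReceivedOn { get; set; }
}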

This next week, I hope to put these resources to good use for a potential enhancement to the MVC framework app Mr. Miyagi and I are working on.

More next week…

Learning to code: Week 3 – Scrum / ASP.NET Identity

This week, my learning to code progress was a bit scattered, but I did learn some great stuff.  Here are the highlights.

  1. I did a Pluralsight class on Scrum (a widely used development methodology).  I previously had some exposure to Scrum through clients where I do infrastructure work and the developers practice Scrum.  While this class did not help me better understand C# / MVC or anything code related, it did help me understand more about the business and process around creating software.  The stats and stories around software projects that go wrong (Google the FBI Sentinel project) were amazing.  I can already see some benefits of learning about Scrum.  More on that in a moment.
  2. I found a great site with connection strings while trying to move from a LocalDB database to SQL Express.
  3. I did a Microsoft Virtual Academy class on ASP.NET Authentication / Identity.  This was a great class.  Many thanks to Adam Tuliper (@AdamTuliper) and Jeremy Foster (@codefoster).  The work Microsoft has done building this is really great.  After all, nearly every app needs a way for users to log in / reset passwords / have roles etc.  So, why should each developer re-invent that wheel – right?  Microsoft seems to have provided a lot of value here.  Leaning on my infrastructure and security background, I did some quick research on how well this implementation does with password hashing.  Overall, the current version is reasonably solid, and it can even be improved further.
  4. I demoed the app Mr. Miyagi and I have been working on to the client.  Again – mostly Mr. Miyagi (not his real name) working and mostly me watching in amazement.  The client was very pleased overall.  We’ve turned things over to them, and they have started significant testing.

    During my demo, a new employee of the client we are building this for asked a great question that made me think back to the Scrum learning I did earlier in the week.  The application we created tracks some information on packages that come in to this client.  During the spec process, we never considered adding the ability to receive multiple similar packages at the same time.  The new employee asked something like, “Hey – how do I add multiple similar packages at the same time?”  I said, “Uh, you can’t.  We did not think of that during our spec process.”  The client-side guy we worked with during the spec process agreed.  The new person said, “Man – that is going to take a long time when I get 30 of the same packages.”  He was exactly right.  So, I thought it through and said I did not think it would be hard to add.  I took some quick notes, then came back to have a look at it.

    It looked simple enough, so I told the client I was sure it would only take a couple of hours to implement and test.  The client approved us tackling that project as a quick enhancement.

    I’m pleased to say I was able to quickly modify the view to accept a package quantity integer, pass that integer to the controller, and have the controller loop through the package creation process – all on my own.  Along the way I learned a few things.  (I’ve included a rough sketch of the idea after this list.)

    This is level 101 stuff here, so for any real developer this would not be anything to be excited about.  However, I was thrilled to have been able to quickly implement a seemingly small but valuable change for the client.  They were happy as well.  Total win – win.

    While doing all of this, I thought back to the Scrum class.  If we had been doing this project more like Scrum suggests, this small but useful feature might have come up earlier.  Realistically, doing Scrum on this project would likely have been overkill, but this experience made me see the value of Scrum – especially with larger projects.
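
For the curious, the shape of the change was roughly like the sketch below.  This is a simplified illustration with made-up names – not our actual code – but it shows the idea: the view posts a quantity along with the package details, and the controller repeats the creation logic that many times.

using System;
using System.Data.Entity;
using System.Web.Mvc;

// Hypothetical model and context, just for illustration.
public class Package
{
    public int PackageId { get; set; }
    public string Description { get; set; }
    public DateTime ReceivedOn { get; set; }
}

public class PackageContext : DbContext
{
    public DbSet<Package> Packages { get; set; }
}

public class PackagesController : Controller
{
    private readonly PackageContext db = new PackageContext();

    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Create(Package package, int quantity = 1)
    {
        if (!ModelState.IsValid)
        {
            return View(package);
        }

        // Repeat the package creation once per requested copy.
        for (int i = 0; i < quantity; i++)
        {
            db.Packages.Add(new Package
            {
                Description = package.Description,
                ReceivedOn = package.ReceivedOn
            });
        }

        db.SaveChanges();
        return RedirectToAction("Index");
    }
}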

One more good week in the books!  Until next time…

Learning to code: Week 2 – Entity Framework

This week I spent some time learning more about EF (Entity Framework).  Entity Framework is an ORM (Object Relational Mapping) framework.  Mr. Miyagi has made significant use of it in the project we are working on together (mentioned in my previous posts), and now I understand why better than I did earlier this week.  Entity Framework removes the need to write lots of potentially tedious SQL statements to get data into and out of a relational database.  It is essentially a layer between the C# code and the SQL database.
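
To make that concrete, here is a rough, generic example of the kind of thing I mean (the class names are made up, not from our project).  Instead of hand-writing an INSERT and a SELECT, you work with C# objects and EF generates the SQL behind the scenes:

using System;
using System.Data.Entity;   // Entity Framework
using System.Linq;

// A plain C# class that EF maps to a database table.
public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
    public DateTime CreatedOn { get; set; }
}

// The DbContext is the layer between the C# code and the SQL database.
public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

class Program
{
    static void Main()
    {
        using (var db = new ShopContext())
        {
            // EF generates the INSERT statement for us.
            db.Customers.Add(new Customer { Name = "Acme", CreatedOn = DateTime.UtcNow });
            db.SaveChanges();

            // EF translates this LINQ query into a SQL SELECT.
            var recent = db.Customers
                           .Where(c => c.CreatedOn > DateTime.UtcNow.AddDays(-7))
                           .OrderBy(c => c.Name)
                           .ToList();

            Console.WriteLine("Customers added this week: " + recent.Count);
        }
    }
}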

The most helpful resource I found this week for learning about Entity Framework was a Pluralsight video series from Julie Lerman (@julielerman).  Julie is clearly an expert when it comes to Entity Framework.  Her video series got me to where I really understand how the C# code in the project with Mr. Miyagi is getting data into and out of our database.  That was a big help.  As an infrastructure guy, I like to understand exactly how something is working, and Julie’s videos helped with that quite a bit.  I was able to see how C# using EF was turning requests in code into SQL queries.  So, overall it was a good week of learning.

I’ve come to realize a few things this week.

#1 – Currently, the more I learn, the more I realize I need to learn about more things.  For example, Entity Framework makes use of LINQ method syntax and LINQ query syntax.  While I now have a good basic understanding of how Entity Framework works, I also realize that for it to matter much, I need to become reasonably proficient at LINQ as well.  I’m hopeful that soon this will settle down a bit and I can stop adding entirely new topics and start learning these things at a deeper level.  For now I am very much still in “drinking from a fire hose” mode.
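
As a quick illustration of what I mean (using made-up in-memory data, not our project), the same query can be written with LINQ method syntax or LINQ query syntax – the compiler turns the query syntax into the same method calls:

using System;
using System.Linq;

class LinqSyntaxDemo
{
    static void Main()
    {
        var packages = new[] { "Box", "Envelope", "Crate", "Box" };

        // LINQ method syntax: chained extension methods and lambdas.
        var methodSyntax = packages
            .Where(p => p.StartsWith("B"))
            .OrderBy(p => p)
            .ToList();

        // LINQ query syntax: SQL-like keywords over the same data.
        var querySyntax = (from p in packages
                           where p.StartsWith("B")
                           orderby p
                           select p).ToList();

        Console.WriteLine(string.Join(", ", methodSyntax));  // Box, Box
        Console.WriteLine(string.Join(", ", querySyntax));   // Box, Box
    }
}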

#2 – The body of knowledge required to be a productive developer is large.  It’s increasingly clear that learning to code C# / the MVC framework (with all of its related parts / pieces) is going to continue to require some substantial work.  This is not something you tinker with for a couple of weeks and suddenly get great at.  I’m OK with this.  If I’m a reasonably productive developer by the end of 2016, I’ll still be thrilled.

#3 – My existing background in I.T. infrastructure, while seemingly related, has not been much help so far.  An outsider might think, “Ahh – David is good at infrastructure, I’m sure he will easily be able to pick up this code thing he is working on.”  The uninitiated might think the two are more closely related than they are.  Frankly, so far the large body of infrastructure knowledge I’ve built up through the years has been of very little help.  From what I can see so far, development is mostly a separate body of knowledge.

Previously, in my infrastructure work I’ve easily and quickly picked up what I will call complementary bodies of knowledge (virtualization / security / storage / cloud IaaS / etc.).  These built on or complemented my existing infrastructure knowledge, so learning them was comparatively easy.  So far, learning to write code has been different.  There simply is not nearly as much crossover.  Infrastructure folks who want to learn to code – you have now been warned.

My hope is that in the future I will be able to leverage my infrastructure knowledge and my ability to write code in a way that takes advantage of both bodies of knowledge.

#4 – Higher developer productivity.  Mr. Miyagi said this to me before, but now I am starting to understand it at a different level.  The Microsoft .Net / Visual Studio world is designed with developer productivity as a high priority.  More and more I can see how that really is true.  EF alone removes what would otherwise be an awful lot of repetitive / boring work from the development workload.

Learning to code: Week 1

As I mentioned in my intro post, I’ve decided to do a bit of a reset on my learning to code journey and focus on some of the basics.  To that end, early this week I found two great C# resources that have helped me make some real progress.

C# Basics

The first resource I used was: The C# Yellow Book written by Rob Miles (@robmiles). Apparently, this book is “used by the Department of Computer Science in the University of Hull as the basis of the First Year programming course.”

yellow-book-cover

This book is exactly what I needed.  I downloaded this on Monday and I have been consuming it all week.  It really helped cement my understanding of some of the important C# basics. Honestly, it also got into a few things that I’m currently still a bit fuzzy on.  I will probably try to get back into those sections again in the days ahead.

If you happen to be following along and you want to learn to code C#, I think this might be a great place for you to start.

MVC Basics

The second resource I found this week is this Intro to ASP.NET MVC video series from Microsoft Virtual Academy.  This series was recorded live by Jon Galloway (@jongalloway) and Christopher Harrison (@geektrainer).  These guys walk through some of the basics of the MVC framework.  I’m only about 50% finished with it, but so far it has been very helpful and taught me some of the MVC framework fundamentals.

So far they have mentioned three additional resources in particular that I’ll certainly be checking into further.

  • http://sidewaffle.com/
    Described by the site as: The SideWaffle extension adds a bunch of useful Snippets, Project- and Item Templates to Visual Studio. The purpose is to make your daily work in Visual Studio a richer and more productive experience.
  • http://vswebessentials.com/
    Described by the site as: Web Essentials extends Visual Studio with a lot of new features that web developers have been missing for many years.  If you ever write CSS, HTML, JavaScript, TypeScript, CoffeeScript or LESS, then you will find many useful features that make your life as a developer easier.
  • https://zencoding.codeplex.com/
    Described by the site as: Zen Coding is a Visual Studio plugin for a fast writing HTML (using CSS-like selector syntax) and CSS (using short versions of CSS properties).
    Web developers Sergey Chikuyonok and Vadim Makeev have built a set of plugins called ‘zen-coding’ that works across a range of IDE’s.

Summary:  This week, I made some good initial progress toward my goal of becoming proficient at C# / MVC framework development using Visual Studio.  I still have an enormously long way to go.  However, I feel like this week I actually started to lay a small but sturdy foundation that I can build on going forward.

Intro: Learning C# / MVC framework / Visual Studio

Each year I like to set some new goals for myself.  One of my goals for 2016 is to learn to code.  Specifically, I’d like to become proficient in C# with the MVC framework.  I’ve decided to blog about my progress toward this goal.  My hope is that some of you who are also interested in learning to code will find value in what I post along the way.

For those of you who know me, you may know that learning to code is something I’ve been tinkering with on and off for a while.  From time to time I do some scripting (.bat files / PowerShell / Azure PowerShell etc) in my role as an I.T. Infrastructure Engineer.  A few years ago I played with Python for a bit…then I lost my mind and took a run at Objective-C for iOS. Last year, I decided I wanted to learn C#.

Initially, I took a look at some of the boot camp style programs that teach C#.  I actually live very near what appears to be a very good one.  However, I’m simply not at a place in life where I can dedicate myself fully to this, so the boot camp method is not currently a viable option for me.  If it is for you, I’d suggest you strongly consider doing that.  I have no doubt that being fully immersed in learning would be a superior way to learn, but for me that just is not currently feasible.  So, my plan is to learn to code primarily through self-paced resources such as books, videos, etc.

Most recently (Q4 2015) I’ve been working on a C# MVC web app with a good friend of mine.  For now, we will refer to him as Mr. Miyagi.  Mr. Miyagi agreed to take on a C# MVC web app project with me in order to help me learn.  I’m doing most of the project management / client communications / specs / QA etc., and he is writing nearly all of the code.  He has been kind enough to let me watch and to explain what he is doing while he codes.  This has been very helpful.  He is a very experienced .NET developer, so just getting the chance to watch him work has taught me some good stuff for sure.  While watching him code, much of what he is doing / explaining makes sense.  However, when I try to write code on my own, I am currently as lost as a ball in tall weeds.  I’ve come to the conclusion that trying to operate at his level with my level of experience is essentially folly.

So, my 2016 goal to learn to code serves also as a bit of a reset for me.  I’m going to start from the ground up.  I’m going to learn the basics well and build on them until I can code proficiently at a higher level.

Currently, my plan is to post a weekly update on what I have learned and the resources I used to learn it.  If you are interested, feel free to follow along.

If you wake up and see IPs you support routing to China, it’s going to be a rough day.

If you wake up and see IPs you support routing to China, it’s going to be a rough day.  Today – was a rough day.

  • At 4:35AM EDT my network monitoring system alarmed that a client’s site-to-site VPN connection was down between the client’s office in NC and our data center in Atlanta, GA.
  • At ~6:15AM EDT I woke up and saw the alarm.  I immediately began testing / collecting data.  It quickly became obvious that this was a routing issue.  Connectivity from some networks (Road Runner and several others) to our client’s data center IPs was broken.  Curiously, traffic from Road Runner / Time Warner Cable was routing out to a router in Los Angeles, CA and then dying.
  • In order to open trouble tickets for a routing issue, you need trace routes.  So I collected several showing networks that worked and ones that did not – in both directions.  Then I opened tickets with Road Runner / Time Warner Cable (the client’s ISP) and the data center (who provides us IPs as part of a BGP mix of bandwidth they maintain and optimize).
  • After some additional troubleshooting while waiting to hear back on my trouble tickets, I noticed that a new BGP advertisement that included our IPs was published at nearly the exact same time that the site-to-site VPN failed.  I’ve sanitized the screen shot to protect the innocent (my client) and the guilty (a Chinese ISP).  The red blocks contain IP details I’ve intentionally removed.
    bgp_update
  • After some troubleshooting we were able to determine that a Chinese ISP had published a bogus BGP advertisement.  The Chinese ISP was wrongly advertising a /20 block of IPs (which included some of ours).  They actually own a /20 block that is one character different from the one they advertised.  It appears they simply made a typo somewhere and caused all of this.
  • Our data center NOC team reached out to the Chinese ISP NOC to see if they could get them to remove this wrong advertisement.
  • At 10:25AM EDT our monitoring system recorded the site-to-site VPN coming back online.
  • When I arrived at the client site (where I was scheduled to be today anyway), I tested again and confirmed the bogus BGP advertisement had been removed.

So – what is the take away from this?  What can be learned?  Here are a few things – several of which I previously knew intellectually and now know at more of a gut level.

  • False BGP advertisements can create a real mess.  I knew this previously – but it never impacted me as harshly as it did today.  Want to read more on how bad this can be?  Check out the BGPMON blog here: http://www.bgpmon.net/blog/.
  • It seems some ISPs filter or manage BGP more carefully than others.  For example, Level 3 never seemed to be affected by this bogus BGP update, while Time Warner / Road Runner apparently accepted it almost immediately.  I’m no BGP guru at all – but wow, improvement is needed here.
  • In the future before I open a routing issue ticket, I’ll take a look not only at trace routes, but also at BGP advertisements.  Huge thanks to Hurricane Electric for a great looking glass tool that ultimately helped me get to the bottom of this.

My experience with my first I.T. security Capture The Flag (CTF) contest while at BSides Las Vegas (BSidesLV).

Background:  For the last two years, I’ve gone to the annual BlackHat USA conference in Las Vegas. I’ve loved it both years. The conference quality along with the presentation quality at BlackHat is fantastic. This year I decided to switch it up a bit and go to BSidesLV and do one day at BlackHat (business pass only) after BSides was over.

What is a BSides?  For a lot of good info on this you can go here. Basically, it is a community sponsored I.T. security conference.

What is an I.T. security capture the flag contest?  Essentially it’s a contest where you and your team defend I.T. systems under your control while attacking the systems of other teams.

Pre-Conference: BSides is completely free to attend (which is amazing).  I chose to sign up as a BSidesLV sponsor a few weeks before the conference.  I chose the Rock level of sponsorship, which was ~$100.  I wish I could say I was a really great guy and I just wanted to help out, but the truth is I did this in order to get a reserved ticket.  The DefCon (another I.T. security conference in Vegas) ticket line is legendary, and I wanted to avoid anything remotely like that if at all possible.  I also wanted to be sure I got in.  I did not want to go all the way to Vegas only to have BSides run out of passes.  So, I forked over my $100 and booked some travel.

While reviewing the BSidesLV web site, I noticed the ProsVsJoes capture the flag contest.  I was intrigued and decided to sign up as a “Joe” since I don’t do infosec (information security) full time, and I had never participated in a CTF (capture the flag) before.  My plan was to get on a team, contribute where I could, have some fun and learn some stuff.  Signing up for the CTF meant that I would miss essentially all of the other BSidesLV sessions.  It’s rumored that most of the sessions are recorded and posted online shortly after the conference.  I hope that is true because there were some really cool looking sessions I would still like to see.

A few days after signing up as a Joe, @dichotomy1 (Dichotomy) dropped me an email and asked if I might be willing to serve as a Pro acting as a team captain for one of the blue (defense) teams.  We went back and forth a bit about my credentials and experience.  He mentioned that at that point he had three full teams, but if more folks signed up he might like to add a fourth team, and that team would need a captain.  He mentioned most of the team captain duties were management / administrative / coordination in nature, so I agreed to do it if he needed the fourth team.

A few days after that he emailed to let me know that our team was a go. He setup a mailing list for our team, I chose the team name Labyrinth Guardians and we were off to the races.

Over the next couple of days, each of our team members introduced themselves over the email list.  I created and shared a Google Drive folder and a Google Doc that became our team-planning document.  I encouraged our team to take a collaborative approach, and boy did they ever do that.

Everyone started to share ideas and questions in the Google Doc.  It started to become clear that I had a group of guys who were really engaged.  As an aside – there were female CTF participants; I just did not have any on my team.  We all wanted not only to participate, but to win.  I set up a group of smaller functional teams in the document and asked the guys to pick a team to be a part of as their primary focus.  We hoped this would help us all get down to business faster in areas where each of us could bring our expertise and background to bear.  While most of the guys on my team did not seem to do infosec full time either, we had a good array of skills.  So, our functional teams ended up being very well rounded.

We decided to schedule a team call to talk through our strategy.  As usual with a group this size, we could not find a time that worked for everyone, so we took a time that worked for most of us and ran with it.  Initially, I tried to set up a Google Hangout on Air (so we could record the call for other team members).  That ended up failing (likely because I did something wrong), so we quickly switched over to a Skype call.  It was a messy 15 minutes trying to get the alternate call up, but the team hung in with me.  Finally, 5 or 6 of us were on a call together.  We spoke for an hour or two, during which time we got to know each other better and planned another meeting early on the first day of the conference in Vegas.  We also spun up a Slack account for our team to use for real-time communication.  I can’t say enough positive things about this tool.  It enabled very efficient real-time communication that gave us an advantage.

A couple of the guys volunteered their rooms as a meeting spot, and we agreed to meet at 8AM local time in Vegas. The conference and the CTF started at 10AM. So, we had about an hour to get in sync and plan, then we all went over to the conference and got settled in.

Go time – Day 1:  At 10AM, it was supposed to be go time. However, the wireless network for the CTF was not cooperating.   The guys who run the BSidesLV network and Dichotomy were working hard to fix things. Eventually, we got to a state where Dichotomy was able to kick things off. My understanding is that next year they are planning to go wired – which makes lots of sense.

The scenario for the CTF is that we are essentially taking over a network that has previously been run by idiots. Dichotomy called it “horribly vulnerable”. Our job was to keep network services up and running, deal with user requests (tickets), and find flags in our environment.   We were to do all of this, while a red team of professional penetration testers was attacking us.

The game is scored by a proprietary program Dichotomy developed called “Score Bot”.  Score Bot periodically measured our service uptime, how many tickets we had closed, how many flags we had submitted, and how many of our hosts had been compromised by the red teamers.  When things kicked off, our guys went to work in their functional team areas.  The Windows team went to work on the Windows boxes, the *nix team went to work on the *nix boxes, etc.

We were doing fairly well midday on the first day.  However, we were heavily focused on finding and submitting flags.  We had found several, but there was some significant ambiguity in how we were to submit them to Score Bot.  Several of our guys banged away at this for a while and eventually figured it out, after expending some significant time.  One of our team members noticed that flags did not count for very many points, but that closing tickets from fake users counted for a lot.  So, we started to give tickets a higher priority.

At the end of day one, we had won.  A screen shot of Score Bot is below.  We had done a decent job of keeping attackers out of our boxes all while closing tickets like mad men.  Team morale was high.  We were excited to have won day one, but we all felt like we were also a bit lucky.  Our expectation was that the work we did on day one securing boxes would pay off on day two.  So, we all headed our separate ways and agreed to meet in the CTF area at 9AM on day two.

CTF - DAY1 - FINAL Scores

Day 2 – Attack time:  I think we were all looking forward to day two.  On day two, the red team members break up and embed with the blue teams.  Then, each blue team goes to work attacking each other.  In general, the majority of our team seemed to have more experience on the defense side of things.  So, day two was a great opportunity to learn from a pro red teamer.  We got a great red teamer on our team.  He quickly engaged and brought his experience to bear for our cause.

Before we started on day two, we also discovered that the entire environment was going to be reset to the state it was in before we started the prior day.  So, all of our defense work from day one was effectively lost, and we had to do it again.  In addition, the scoring model was going to change: tickets would count for fewer points, and flags would count for more points.  Since we knew the environment, we hit the ground running quickly.  We got boxes locked down as much as possible while maintaining service uptime.  We submitted flags like mad men.  We also went to work scanning both our own network and our adversaries’ networks for vulnerable systems.  We used Nmap, Nessus and OpenVAS for this work.

Once we identified vulnerable hosts on our network, we mitigated the vulnerabilities as quickly as we could.  However, we identified multiple RCE (remote code execution) vulnerabilities that we were never able to fix due to issues in the environment.  After we had our own stuff at least heading mostly in the right direction, we started kicking off scans of the competition.  Once we identified vulnerable hosts on the other teams’ networks, we worked with our red team member to start attacking them.  Fortunately, our work on the defense side of things, along with a bit of luck, paid off well.  By mid afternoon, none of our boxes had been compromised yet.  All of the other teams had boxes that had been burned, some thanks to our guys working with our red teamer.  At one point Dichotomy was essentially asking the other teams to focus on us and break into our boxes.  Since we still had some unresolved vulnerable systems, eventually they did get in to a couple of our lower priority boxes.

ComeAtMeBro

We started to focus heavily on offense later in the afternoon.  We identified the team that was closest to us in points (SaltyGoats – SG) and went after them with everything we had.  We had identified some unresolved vulnerabilities in their environment, so our red team pro went to work.  After about an hour of focused work, he had compromised their Active Directory Domain Controller (AD DC), so we essentially owned their Windows network.  Our red teamer did some work so that Score Bot would know that a couple of our competitor’s boxes were owned, then we started to plan for what we would do once the “scorched earth” rules went into effect at 4PM.  Prior to those rules going into effect, all we could do was signal Score Bot that we owned these boxes.  We were not allowed to destroy them.  However, once the scorched earth rules started, it was open season.

Because of the way the scoring engine worked (which was public knowledge from the beginning), a down domain controller would cause our competition to lose all availability points (because DNS in the environment would fail, and all the rest of the scoring of their environment depended on it).  So, we started to talk about how to kill their AD box, which we had gained administrator-level access to.  In the end we opted to break the box in a way that would not allow it to boot (we marked the boot partition inactive using diskpart), then simply rebooted it.  After a couple of minutes their network went all red.  Mission accomplished.

CTF - FINAL DAY2 Scores - 8PM EDT

After that, a few of our guys continued to focus on attacking, but at that point we had pretty well established a point lead that would be hard for others to catch up to in the time remaining.  At 5PM Dichotomy had the red teamers come up and give a presentation on how they attacked us.  We did not learn anything especially surprising, but it was a good overview and good education.  In the end – we won.  The official scores were:

Screen Shot 2015-08-07 at 5.13.49 PM

These scores differ from the screen shots shown because positive ticket scores did not count on day two (one of the rule changes).  So, essentially our total score minus our positive ticket scores equaled our ending score.  Also – it’s doubtful I got my screen shot at the exact last moment.

Thanks!  I believe all of our guys had a good time. I know I sure did! With that in mind, I’d like to thank some folks who worked hard so that we could enjoy this experience.

  • The folks who got BSides started, and have grown it into an amazing movement. For some great back story, check out this podcast with @Jack_Daniel where he explains how this got started. While I’ve never had the pleasure of meeting Jack, I have seen him in the wild and he is an incredibly well spoken (not to mention well dressed) guy. He is an absolute gift to the infosec community.  Clearly one of the really good guys.
  • The entire team who makes BSidesLV happen. Organizing a conference this large is an enormous amount of work. Organizing a free community driven conference on a limited budget with only a group of volunteers and pulling it off as well as these folks do is an absolute work of art. You folks are amazing in every sense of the word.
  • The BSidesLV network team. You all showed tremendous grace under fire dealing with the CTF WiFi issues. Well done!
  • Dichotomy – Dichotomy put together a really great CTF event for us to participate in. The amount of work required to create this game environment must have been huge. Thanks man! You gave us all a great opportunity to learn, have fun, and grow. We appreciate it! I’ll forever remember this fondly as my first CTF.
  • My team (Labyrinth Guardians) – You guys ROCKED!  It was great to work with such a good group of guys who were willing to do whatever was needed to succeed.  Your skills, patience, flexibility, creativity and generally awesome ideas are what allowed us to win.  I hope you guys had as much fun as I did.
  • Our friendly competition: Let’s be honest – any of us could have won this thing.  Our friendly competition is just as smart as we are.  I spoke to one of the other team captains after we were done on day two, and he was just a great guy.  I hope we get the opportunity to get to know you guys better in the years ahead.

What’s next?  I want to contribute to the BSides movement.  This year, I basically enjoyed an awesome event because of the work of a lot of dedicated folks.  Now I feel like it’s time for me to get more involved and contribute.  I’m not sure exactly where that is yet – but I’m determined to help where I can.  What about you?

If your backup systems fail silently, would you notice?

Me: <while working on something else with a client> “So – how have your backups been doing recently?”
Client: “Umm…good…I think…?” insert <shy look> and <crickets chirping – loudly>

I cut my teeth in I.T. (as many of us did) on Backup Exec and tape drives.  Ugh – right?  Back in the day, it was common for clients to need to swap tapes each day.  In those days, I would often show clients how to check the status of the previous night’s backup jobs when they swapped the tapes.  That process gave folks a physical cue to check the backup status and make sure that the tape they were taking offsite actually had good, verified data on it.

Fast forward to now.  Many modern backup systems are highly automated.  Many of them require no physical attention at all because they write data to disk-based storage that then gets automatically replicated offsite.  Many of them leverage email-based reporting.  So, the old physical cue to check the backup status when you change the tape is simply gone.  Don’t get me wrong.  Newer backup solutions that leverage virtualization technology like VMware CBT (Changed Block Tracking) and deduplication are dramatically better than the old tape-based solutions.  However, they introduce a new challenge.  They are so good and so highly automated that they are easy for a busy I.T. pro to forget about.

What happens by default in many cases is that the person(s) who monitors the backup systems gets emails from those systems letting them know how the backup jobs are doing.  Many organizations have multiple backup jobs configured that run at different times of the day.  These email-based logs can get rather noisy and potentially confusing.  So, what do some folks do?  Well, they create an email rule to move all those noisy backup alerts into a folder…which they check…sometimes.  Hear the crickets again?

If the backup system fails, often the emails simply stop.  The system fails – silently.  In the worst case scenario, someone finds out that their backup solution failed silently and they don’t have recent backups just when they need to do a restore.  Yikes.  Now we are in potentially RGE (Resume Generating Event) territory for that person, and real potential trouble for the business they work for.

Before I go further, let me say that a lot of people do a great job monitoring their backup systems.  However, we are all human and we all make mistakes and occasionally miss things.  The goal of this post is not to give you hard working I.T. pros a hard time, but to help better protect you and the business you work for from the very real and very painful damage a data loss event can cause.

With that in mind, here are several potential solutions to this problem.

Low tech: Make a checklist, or do something that forces you to manually check the status of each of your backups each day.  Yep – that’s right.  The I.T. guy just suggested a checklist.  I’m even good with you filling it out on paper.  It is low tech – but it works.  Do something that changes your behavior and forces you to check your backups – all of your backups – each day.  That way when you go to do a restore, you can be confident your backup jobs have been running as planned.

A paper-based checklist does not seem like an awesome solution.  So, here is a potentially better way.  I’ve implemented this recently, and so far I am very happy with it.

High tech: Let your monitoring system keep track of your backup logs!  Recently, I found a great component (sensor really) in my favorite monitoring software package.  That monitoring software package is PRTG from Paessler.  I use this tool to monitor my own infrastructure, as well as infrastructure in several client environments.  PRTG is absolutely fantastic. I can’t say enough good things about it.  If you are not using it, I’d strongly suggest you check it out.  They have a free trial that you can get from here.

PRTG has an IMAP sensor that you can configure to connect to an email account (over IMAP) so that it can essentially read your backup system’s email reports for you and actively alarm when something is not working correctly.  The PRTG folks have a great write-up on the entire config process here: http://www.paessler.com/manuals/prtg/monitoring_backups.
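
If you are curious what a check like this boils down to (or ever want to roll your own version of the idea), the core logic is just: connect to the mailbox over IMAP, look for a recent backup report, and raise an alarm if none arrived.  Here is a rough C# sketch of that concept using the MailKit library – the server name, credentials, and subject filter are placeholders, and this is only an illustration of the principle, not how PRTG implements its sensor:

using System;
using MailKit;
using MailKit.Net.Imap;
using MailKit.Search;

class BackupReportCheck
{
    static void Main()
    {
        using (var client = new ImapClient())
        {
            // Placeholder server and credentials – point this at your own report mailbox.
            client.Connect("imap.example.com", 993, useSsl: true);
            client.Authenticate("backup-reports@example.com", "app-password");
            client.Inbox.Open(FolderAccess.ReadOnly);

            // Look for a backup report delivered since yesterday.
            var query = SearchQuery.DeliveredAfter(DateTime.Now.AddDays(-1))
                                   .And(SearchQuery.SubjectContains("Backup job"));
            var matches = client.Inbox.Search(query);

            if (matches.Count == 0)
            {
                // No report at all – the silent-failure case this post is about.
                Console.WriteLine("ALERT: no backup report received since yesterday.");
            }
            else
            {
                Console.WriteLine("Found " + matches.Count + " backup report(s).");
            }

            client.Disconnect(true);
        }
    }
}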

So, if you implement PRTG and properly configure this monitoring, PRTG will actively alert you if a backup system fails silently.  This alert will hopefully cause you to investigate and resolve the problem quickly.

Obviously, it is critical to properly configure this monitoring.  You need to be crystal clear on what you are monitoring for.  If you configure PRTG incorrectly, it could give you a false sense of security and make you think things are working well when in fact they are not.  So, if you decide to implement this solution I would suggest that you configure it first, then test it thoroughly to make sure that it properly alarms when a backup fails.  You can simulate this in a variety of ways.

If you need help implementing this, give me a call.  I do consulting for a living, and I’d be happy to help you implement this solution.  If you are an existing client and your environment is too small to justify a dedicated PRTG install, let me know.  If we discuss it and it is appropriate, I’ll work with you to potentially use my PRTG implementation to monitor your backups for you.

Replicating a test / lab environment hosted on Microsoft Azure with Azure PowerShell

In my last post, I covered how to stand up a small test / lab environment in Azure using Azure PowerShell.  If you have not seen that post, you can access it by clicking here.

In this post I want to cover another script I’ve created that automates the replication of a lab environment to a separate, isolated new lab environment.  The replica will have its own affinity group, its own virtual network, and its own dedicated storage account.

The client I built the initial set of these scripts for wanted to offer their clients a trial of their enterprise software in a hosted Azure lab environment.  So, this script is designed to take what we call the “gold lab” and clone it to a new lab that would be handed over to a client for testing purposes.

Just like the previous script, this script requires that you manually create the virtual network in Azure first.  The pre-deployment notes in the script should explain this in sufficient detail.  Next, edit the script to set the variables it will run with.

  • $labName is the name you wish to use for the destination (new) lab.  As with the first script, this name will either become the name of the components, or be prepended to them.
  • $azureLocation is the Azure region where the destination (new) lab will be deployed.  I’ve not yet tested replication to a new region.  So, for my purposes this region has been the same region that the original “gold” lab was in.
  • $instanceSize is the instance size you wish for your destination (new) machines to be assigned when they are provisioned.
  • $sourceStorageAccount is the source storage account where the source “gold lab” vhds live.  If you used my previous script to create your gold lab, this will be the $labName you used in that script.
  • $sourceVM1Disk and $sourceVM2Disk are the source VHDs that you want to copy.  Unfortunately, at this time you will need to manually enter these.  I’d like to improve this in the future.  You can get these by browsing your source storage container.
  • $sourceContainer is the source container where the VHDs live.  $destinationContainer is the destination container where the new VHDs will live.  Normally this will be set to “vhds” but I wanted to offer these as variables so that if you like to have your VHDs in another container, you could still easily use this script.

Once you have those variables set, your source VMs are powered off and you have manually created the destination virtual network, you should be ready to roll.

Oh, before you run the script… Please remember that I am in no way responsible for what this script might do. I believe it to be useful, and generally safe. You should make sure you are 100% comfortable with what this script is doing before you begin. Ok – now that the lawyers can sleep again tonight, here we go. When you run the script you should see output something like this.

lab-replica

You now have a duplicate of your initial gold lab running in an isolated environment on Azure.  In my case, this includes an Active Directory (AD) Domain Controller (DC) as well as some other software that generally does not respond well to being cloned.  However, because the script is making an exact copy of the disks and spinning the new VMs up in a new, identically configured, isolated lab (where it cannot communicate with the source lab), things work fine for testing purposes.  Keeping the internal IP address assignments identical is key to making this work.  In my case, I’m going back in and editing my virtual network after creation to make sure that the DNS server handed out by DHCP is pointed at the IP reserved for my AD DC.  This is obviously required for AD to behave.  This may or may not be required in your situation.

I hope this is helpful to you.  I am sure this script could be improved.  However, I also hope it is in good enough shape to be helpful to some of you.  If you see ways it could be made better, please let me know.  Keep an eye on my blog for my lab destruction script that I’ll be posting soon.  Until then, be sure to turn off / remove things you don’t want Azure to bill you for.

#region Notes
# Lab Cloner - V2 Built on 2/7/2015
# Built by David Winslow - wdw.org
#
# Azure PowerShell General Notes / Commands
#		[Console]::CursorSize = 25 (Makes the cursor blink so you don't go insane.)
#		Add-AzureAccount (Connects Azure PowerShell up to your Azure account)
# 
# Pre-Deployment Notes:
#    Run Add-AzureAccount (Connects Azure PowerShell up to your Azure account) before running this script.
#    The destination network for the clone must be built manually in Azure portal before running this script.  
#      The network name must match $labName below.
#      Address space: 10.20.0.0 /16
#      Subnet-1 10.20.15.0 /24
#      Subnet name must be Subnet-1
# You need to choose values for the variables below.
#endregion

#region Pre-Deployment Variables
$labName = "lab104"
# This name will be used entirely for, or prepended to, most components.
# Must be all lower case letters and numbers only.  
# Must be Azure globally unique.
# Must not exceed 11 characters in length.

$azurelocation = "East US 2"
# This is the location that resources will be created in.

$instanceSize = "Standard_D2"
# This determines what size instances you create for the destination VMs launch.

$sourceStorageAccount = "lab102"
# This sets the source storage account.

$sourceVM1Disk = "lab102vm1-lab102vm1-2015-2-13-10-55-45-799-0.vhd"
# This sets the 1st source disk.

$sourceVM2Disk = "lab102vm2-lab102vm2-2015-2-13-10-53-38-795-0.vhd"
# This sets the 2nd source disk.

$sourceContainer = "vhds"
# This is the container the source disks are in.

$destinationStorageAccount = $labName
# This sets the destination storage account to the $labName

$destinationContainer = "vhds"
# This is the container the destination disks will be placed in.
#endregion


#region Write Pre-Deployment Variables to screen
Write-Output "============================"
Write-Output "Cloning Lab."
Write-Output "============================"
Write-Output " " 
Write-Output "Source storage account set to:" $sourceStorageAccount
Write-Output " " 
Write-Output "Souce VM Disk 1 set to:" $sourceVM1Disk
Write-Output " " 
Write-Output "Souce VM Disk 2 set to:" $sourceVM2Disk
Write-Output " " 
Write-Output "Souce container set to:" $sourceContainer
Write-Output " " 
Write-Output "Destination storage account $labName will be created."
Write-Output " " 
Write-Output "Destination container $destinationContainer will be created."
Write-Output " " 
Write-Output "Destination instance size set to:" $instanceSize
Write-Output " " 
Write-Output "Destination lab name set to:" $labName
Write-Output " " 
Write-Output "Destination Azure lab location set to:" $azurelocation
Write-Output " " 
#endregion

#region Assigning Azure Subscription
Write-Output " "
Write-Output "=================================="
Write-Output "Assigning Azure Subscription"
Write-Output "=================================="
Select-AzureSubscription "Pay-As-You-Go"
#endregion

#region Provisioning Affinity Group
Write-Output " "
Write-Output "=================================="
Write-Output "Provisioning Affinity Group"
Write-Output "=================================="
New-AzureAffinityGroup -Name $labName -Location $azureLocation
#endregion

#region Provisioning Destination Storage Account
Write-Output " "
Write-Output "=================================="
Write-Output "Provisioning Destination Storage Account"
Write-Output "=================================="
New-AzureStorageAccount $labName -AffinityGroup $labName
#endregion

#region VHD Blob Copy
Write-Output " " 
Write-Output "============================"
Write-Output "Starting VHD Blob Copy"
Write-Output "============================"
# Get Source Keys
$sourceStorageAccountKey = (Get-AzureStorageKey -StorageAccountName $sourceStorageAccount).Primary
$destinationStorageAccountKey = (Get-AzureStorageKey -StorageAccountName $destinationStorageAccount).Primary
$sourceContext = New-AzureStorageContext –StorageAccountName $sourceStorageAccount -StorageAccountKey $sourceStorageAccountKey
$destinationContext = New-AzureStorageContext –StorageAccountName $destinationStorageAccount -StorageAccountKey $destinationStorageAccountKey

#create the destination container
New-AzureStorageContainer -Name $destinationContainer -Context $destinationContext

# VM1Disk
$blobCopy = Start-AzureStorageBlobCopy -DestContainer $destinationContainer `
                        -DestContext $destinationContext `
                        -SrcBlob $sourceVM1Disk `
                        -Context $sourceContext `
                        -SrcContainer $sourceContainer

# VM2Disk
$blobCopy = Start-AzureStorageBlobCopy -DestContainer $destinationContainer `
                        -DestContext $destinationContext `
                        -SrcBlob $sourceVM2Disk `
                        -Context $sourceContext `
                        -SrcContainer $sourceContainer
#endregion

#region Disk Additions
Write-Output " " 
Write-Output "============================"
Write-Output "Starting Disk Additions"
Write-Output "============================"
# Add disks from the just cloned blobs
$VM1disk = $labName + "VM1.vhd"
$VM2disk = $labName + "VM2.vhd"
$VM1diskLocation = "https://" + $labName + ".blob.core.windows.net/vhds/" + $sourceVM1Disk
$VM2diskLocation = "https://" + $labName + ".blob.core.windows.net/vhds/" + $sourceVM2Disk
Write-Output $VM1disk
Write-Output $VM2disk
Write-Output $VM1diskLocation
Write-Output $VM2diskLocation
Add-AzureDisk -DiskName $VM1disk -OS Windows -MediaLocation $VM1diskLocation -Verbose
Add-AzureDisk -DiskName $VM2disk -OS Windows -MediaLocation $VM2diskLocation -Verbose
#endregion

#region Assigning Azure Subscription
Write-Output " "
Write-Output "=================================="
Write-Output "Assigning Destination Azure Subscription and Storage Account"
Write-Output "=================================="
Set-AzureSubscription -SubscriptionName "Pay-As-You-Go" -CurrentStorageAccount $labName
#endregion

#region Provisioning Reserved IP Addresses
Write-Output " "
Write-Output "=================================="
Write-Output "Provisioning Reserved IP Addresses"
Write-Output "=================================="
$VM1IP = $labName + "VM1IP"
$VM2IP = $labName + "VM2IP"
New-AzureReservedIP -ReservedIPName $VM1IP -Label $VM1IP -Location $azurelocation
New-AzureReservedIP -ReservedIPName $VM2IP -Label $VM2IP -Location $azurelocation
#endregion


#region Starting VM Creation
Write-Output " " 
Write-Output "============================"
Write-Output "Starting VM Creation"
Write-Output "============================"
$VM1 = $labName + "VM1"
$VM2 = $labName + "VM2"
New-AzureVMConfig -Name $VM1 -InstanceSize $instanceSize -DiskName $VM1disk | Set-AzureSubnet 'Subnet-1' | Set-AzureStaticVNetIP -IPAddress 10.20.15.5 | Add-AzureEndpoint -Name "RemoteDesktop" -Protocol "tcp" -PublicPort 33899 -LocalPort 3389 | Add-AzureEndpoint -Name "PowerShell" -Protocol "tcp" -PublicPort 60208 -LocalPort 5986 | New-AzureVM -ServiceName $VM1 -AffinityGroup $labName -ReservedIPName $VM1IP -VNetName $labName 
New-AzureVMConfig -Name $VM2 -InstanceSize $instanceSize -DiskName $VM2disk | Set-AzureSubnet 'Subnet-1' | Set-AzureStaticVNetIP -IPAddress 10.20.15.6 | Add-AzureEndpoint -Name "RemoteDesktop" -Protocol "tcp" -PublicPort 33900 -LocalPort 3389 | Add-AzureEndpoint -Name "PowerShell" -Protocol "tcp" -PublicPort 60209 -LocalPort 5986 | New-AzureVM -ServiceName $VM2 -AffinityGroup $labName -ReservedIPName $VM2IP -VNetName $labName
#endregion

Automating the creation of a test / lab environment hosted on Microsoft Azure with Azure PowerShell

A couple of weeks ago a client who is a Microsoft Gold ISV partner approached me about building out a test lab environment in Microsoft Azure.  For those of you who may not know, Microsoft Azure is Microsoft’s public cloud offering (IaaS / PaaS / etc).

Ultimately, my client wanted an environment that they could prepare initially and then replicate.  The client’s goal is ultimately to be able to provide some of their potential clients with a live trial of the enterprise software they create.  So, a big public cloud provider like Azure was a perfect fit.  Obviously, you could also use AWS / Google Cloud or any one of a number of big IaaS providers for this purpose.  However, since this client is a Microsoft Partner, and mostly a Microsoft shop overall, Azure was the direction they wanted to go.

Since the client ultimately wanted to automate the replication of these environments, I dove in to find out the best way to do it.  It did not take long to figure out that Microsoft Azure PowerShell offered the exact functionality I needed.  So, I decided to jump in and learn a bit while doing this.

I decided that if I was ultimately going to replicate these environments using PowerShell, that I should go ahead and build the initial “gold” environment using PowerShell as well.  The result of that effort is the script below.

In the next few days (once I have the bugs worked out) I’ll be posting scripts that can be used to replicate a lab, as well as destroy a lab once you are finished.  If I have time, I’m also going to create some simple scripts that can be used to power a lab down and power it back up as well.

If you ever wanted to build a lab setup quickly in PowerShell perhaps this will help you too.  It’s unlikely my script is perfect for your project, but perhaps it will serve as a starting point for something you can customize to fit your needs.  Here are a few things to mention before you get started.

  1. Azure costs money.  So, if you are going to test this be sure you know what you are doing and you know how to turn things off / remove them when you are finished.
  2. You can sign up for a free $200 trial which is a great way to start.
  3. You will need Azure PowerShell which you can download here.
  4. Once you have Azure PowerShell downloaded and installed you will need to connect it to your Azure account.  To do that simply run:
     Add-AzureAccount
    

    This will pop up a box that will ask you to authenticate to your Azure account.

  5. If this is your first time using Azure PowerShell, you may also need to run the command below to correctly target your Azure PowerShell session at the correct subscription:
    Select-AzureSubscription -SubscriptionName "Pay-As-You-Go"
    
  6. My script relies on you manually building out a virtual network in Azure for these VMs to live in first.  What is needed should be clear from the Pre-Deployment Notes section of my script.  At this time, Microsoft does not seem to offer a great way to automate the creation of this virtual network (if they do, I am missing it).

Once you have all that done, you should be ready to run the script.  The sample script below will build two Azure VMs.  You can choose the following variables in the Pre-Deployment Variables section of the script before running it.  Those variables are:

  • $labName – This is essentially the unique ID of this lab.  That way when you use this tool over and over again to create hundreds of labs that are infinitely useful to you and your organization, you will be able to tell all of the components  apart.  Sorry – I got a bit carried away there.  This $labName will either be the name that is used for components (Affinity Group / Storage Account etc) or be prepended to the name of components (reserved IPs, VMs etc).
  • $azureLocation – This is the Azure region that everything this script creates will be deployed in.  You can find a list of Azure Regions here.  It is important that the location you choose match the location of your virtual network.
  • $instanceSize – This is the instance size of the VMs that you will deploy.  You can find a list of Azure Instance sizes here.
  • $baseImage – This is the base operating system image your VMs will start with.  To get a list of potential images to start with, you can run this command from your Azure PowerShell window.  Be sure to filter for what you are looking for by replacing the “Windows Server 2012 R2 Datacenter*” in the example below.
    Get-AzureVMImage | Where-Object { $_.Label -like "Windows Server 2012 R2 Datacenter*" }
    
  • $adminUser – This is the guest VM administrator username for the VMs that you will deploy.
  • $adminPassword – This is the password for the guest VM $adminUser username you chose above.

Once you have those chosen variables, you should be ready to roll.  Oh, before you run the script… Please remember that I am in no way responsible for what this script might do.  I believe it to be useful, and generally safe.  You should make sure you are 100% comfortable with what this script is doing before you begin.  Ok – now that the lawyers can sleep again at night, here we go.  When you run the script you should see output something like this.

lab-builder-output

Here is what is going on in the background.

  • An Azure Affinity Group named with your $labName variable is being created.  Affinity groups are critically important. When I started this process, I did not understand that.  I was getting terrible network performance between my VMs even though all my VMs were in the same region.  Basically, the affinity group makes sure that all of the other resources are located as close together as possible from a storage / network standpoint.  You can read more about affinity groups here.
  • An Azure Storage Account named with your $labName variable is being created.  This is where all of your VMs disks will be stored.
  • Two Azure reserved IP addresses are reserved.  Reserved IP addresses are IP addresses that will stay with a VM even if it is powered off / deprovisioned.  You can read more about them here.  There are costs associated with these which you can better understand by reading more about that topic here.
  • Two Azure VMs are built out in your affinity group, based on the $instanceSize and $baseImage you selected in the script.  These VMs have your $labName prepended to them.  In addition, the admin username and password that you chose are configured on these VMs.  Be sure to change that password after the VMs are deployed.  Having your admin password lying around in a plain-text PowerShell script is a terrible idea!

In summary, I am sure this script could be improved.  However, I also hope it is in good enough shape to be helpful to some of you.  If you see ways it could be made better, please let me know.  Keep an eye on my blog for my lab replication / lab destruction scripts that I’ll be posting soon.

#region Notes
# Lab Builder - V2 Built on 2/9/2015
# Built by David Winslow - wdw.org
#
# Azure PowerShell General Notes / Commands
#		[Console]::CursorSize = 25 (Makes the cursor blink so you don't go insane.)
#		Add-AzureAccount (Connects Azure PowerShell up to your Azure account.)
#
# V2 improvements:
#    -Proper affinity group provisioning
#    -Added variable for regions
#    -Added variable for base images
#    -Added variable for instance sizes
#
# Pre-Deployment Notes:
#    Run Add-AzureAccount (Connects Azure PowerShell up to your Azure account) before running this script.
#    The virtual network must be built manually in the Azure portal before running this script.  I hope to improve this in a future version.
#		-The virtual network name must match the $labName variable below.
#       -The virtual network location must match the $azureLocation variable below.
#		 -Address space: 10.20.0.0 /16
#		 -Subnet-1 10.20.15.0 /24
#		 -Subnet name must be Subnet-1
# You need to choose values for the variables below.
#endregion


#region Pre-Deployment Variables
$labName = "lab51"
# This name will be used entirely for, or prepended to, most components.
# Must be all lower case letters and numbers only.  
# Must be Azure globally unique.
# Must not exceed 11 characters in length.
$azureLocation = "East US 2"
# This is the location that resources will be created in.
$instanceSize = "Standard_D2"
# This determines what size instances you launch.
$baseImage = "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201412.01-en.us-127GB.vhd"
# This determines what image these VMs are built from.
# "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201412.01-en.us-127GB.vhd" - Windows Server 2012 R2 - Datacenter - December 2014
# Reference this to get image names if needed. (Get-AzureVMImage | Where-Object { $_.Label -like "Windows Server 2012 R2 Datacenter*" } )
$adminUser = "labadmin"
# This is the initial default admin username for each VM.
$adminPassword = "@@labAdminPassword2015!!"
# This is the initial $adminUser account password for each VM.
#endregion


#region Write Pre-Deployment Variables to screen
Write-Output " "
Write-Output "=================================="
Write-Output "Building Lab."
Write-Output "=================================="
Write-Output " " 
Write-Output "Lab name set to:" $labName
Write-Output " " 
Write-Output "Azure location set to:" $azureLocation
Write-Output " " 
Write-Output "Instance size set to:" $instanceSize
Write-Output " " 
Write-Output "Base Image set to:" $baseImage
Write-Output " " 
Write-Output "Admin username set to:" $adminUser
Write-Output " " 
Write-Output "Admin password set to:" $adminPassword
Write-Output " " 
#endregion


#region Provisioning Affinity Group
Write-Output " "
Write-Output "=================================="
Write-Output "Provisioning Affinity Group"
Write-Output "=================================="
New-AzureAffinityGroup -Name $labName -Location $azureLocation
#endregion


#region Provisioning Storage Account
Write-Output " "
Write-Output "=================================="
Write-Output "Provisioning Storage Account"
Write-Output "=================================="
New-AzureStorageAccount $labName -AffinityGroup $labName
#endregion


#region Assigning Azure Subscription
Write-Output " "
Write-Output "=================================="
Write-Output "Assigning Azure Subscription"
Write-Output "=================================="
Set-AzureSubscription -SubscriptionName "Pay-As-You-Go" -CurrentStorageAccount $labName
#endregion


#region Provisioning Reserved IP Addresses
Write-Output " "
Write-Output "=================================="
Write-Output "Provisioning Reserved IP Addresses"
Write-Output "=================================="
$vm1IP = $labName + "vm1IP"
$vm2IP = $labName + "vm2IP"
New-AzureReservedIP -ReservedIPName $vm1IP -Label $vm1IP -Location $azureLocation
New-AzureReservedIP -ReservedIPName $vm2IP -Label $vm2IP -Location $azureLocation
#endregion


#region Building VMs
Write-Output " "
Write-Output "=================================="
Write-Output "Building VMs"
Write-Output "=================================="
$vm1vm = $labName + "vm1"
$vm2vm = $labName + "vm2"
New-AzureVMConfig -Name $vm1vm -InstanceSize $instanceSize -ImageName $baseImage | Add-AzureProvisioningConfig -Windows -AdminUsername $adminUser -Password $adminPassword | Set-AzureSubnet 'Subnet-1' | Set-AzureStaticVNetIP -IPAddress 10.20.15.5 | New-AzureVM -AffinityGroup $labName -ServiceName $vm1vm -ReservedIPName $vm1IP -VNetName $labName -Location $azureLocation
New-AzureVMConfig -Name $vm2vm -InstanceSize $instanceSize -ImageName $baseImage | Add-AzureProvisioningConfig -Windows -AdminUsername $adminUser -Password $adminPassword | Set-AzureSubnet 'Subnet-1' | Set-AzureStaticVNetIP -IPAddress 10.20.15.6 | New-AzureVM -AffinityGroup $labName -ServiceName $vm2vm -ReservedIPName $vm2IP -VNetName $labName -Location $azureLocation
#endregion


#region Completion Message
Write-Output " "
Write-Output "=================================="
Write-Output "Lab Builder Script Complete!"
Write-Output "=================================="
#endregion