Automating Google Cloud Platform Snapshots with PowerShell

As part of a project I am working on, I needed a way to automate disk snapshot creation and retention on Google Cloud Platform.  I found several good examples of how to do this with Bash scripts, but I was unable to find anything in native PowerShell that I liked.  So, I wrote some scripts and decided to publish them here and on GitHub in the hope of helping someone else.

  • create-snaps.ps1 automates snapshot creation.
  • remove-snaps.ps1 automates snapshot cleanup based on the age of the snapshot.

Obviously, you will need to schedule these to run at some interval.  I’ve used Windows Task Scheduler for that purpose, and it seems to work great.

Obvious Warning: Please use these scripts at your own risk. I accept no responsibility for your use of them. However, if you run into any issues please let me know so I can work to improve them.

create-snaps.ps1

# Set the path and file name for PowerShell transcripts (logs) to be written to.
$LogPath = "c:\logs\powershell\snaps\"
$LogFile = Get-Date -Format FileDateTimeUniversal
$TranscriptFileName = $LogPath + $LogFile +".txt"

# Start the transcript.
Start-Transcript -Path $TranscriptFileName

#Set the GCP project.
$Project = "put-your-gcp-project-here-12345"

#Set the zone(s) where the disks are that you would like to take snapshots of.
$Zones = "us-east1-d", "us-central1-c"

#Record the date that the snapshots started.
$StartTime = Get-Date

#Go snapshot all of the disks in the zones identified above.
foreach ($Zone in $Zones) {
    $DisksInZone = Get-GceDisk -Project $Project -Zone $Zone | ForEach-Object { $_.Name }

    foreach ($Disk in $DisksInZone) {
        Write-Host "=========================================="
        Write-Host "$Zone - $Disk"
        Write-Host "=========================================="
        Add-GceSnapshot -Project $Project -Zone $Zone $Disk # In the future we could clean this output up a bit.
    }
}

#Record the date that the snapshots ended.
$EndTime = Get-Date

#Print out the start and end times.
Write-Host "=========================================="
Write-Host "Started at:" $StartTime
Write-Host "Ended at:" $EndTime
Write-Host "=========================================="

# Stop the transcript.
Stop-Transcript

#Send the PowerShell transcript (log) by email. You can delete this entire section if you don't want log copies delivered by email.
#Google Cloud Platform blocks direct outbound mail on port 25. Reference: https://cloud.google.com/compute/docs/tutorials/sending-mail/

#Mail Server Settings
$smtpServer = "mail.yourdomainname.com"
$smtpPort = "2525" #Don't put 25 here it will not work. See link above.

$att = new-object Net.Mail.Attachment($TranscriptFileName)
$msg = new-object Net.Mail.MailMessage
$smtp = new-object Net.Mail.SmtpClient($smtpServer, $smtpPort)

# Set the email from / to / subject / body / etc here:
$msg.From = "gcpsnapshots@yourdomainname.com"
$msg.To.Add("you@yourdomainname.com")
$msg.Subject = "GCP Snapshot Report"
$msg.Body = "Please see the attached PowerShell transcript."

# Attach the log and ship it.
$msg.Attachments.Add($att)
$smtp.Send($msg)
$att.Dispose()

remove-snaps.ps1

# Set the path and file name for PowerShell transcripts (logs) to be written to.
$LogPath = "c:\logs\powershell\snaps\"
$LogFile = Get-Date -Format FileDateTimeUniversal
$TranscriptFileName = $LogPath + $LogFile +".txt"

# Start the transcript.
Start-Transcript -Path $TranscriptFileName

#Set the project.
$Project = "put-your-gcp-project-here-12345"

#Record the date / time that the snapshot cleanup started.
$StartTime = Get-Date

#Choose what snaps to remove. The script takes the current date / time, subtracts 30 days, and stores the result in $deleteable. Any snaps older than that get removed. Obviously, you could tweak the number of days to fit your needs.
$deleteable = (Get-Date).AddDays(-30)

#Log what date and time we set $deleteable to.
Write-Host "Deleting snapshots older than:" $deleteable

#Delete the actual snaps.
$snapshots = Get-GceSnapshot
foreach ($snapshot in $snapshots) {
    $snapshotdate = Get-Date $snapshot.CreationTimestamp
    if ($snapshotdate -lt $deleteable) {
        Write-Host "Removing snapshot:" $snapshot.Name
        Remove-GceSnapshot $snapshot.Name
    }
}

#Record the date / time that the snapshot cleanup ended.
$EndTime = Get-Date

#Print out the start and end times.
Write-Host "=========================================="
Write-Host "Started at:" $StartTime
Write-Host "Ended at:" $EndTime
Write-Host "=========================================="

# Stop the transcript.
Stop-Transcript

#Send the PowerShell transcript (log) by email. You can delete this entire section if you don't want log copies delivered by email.
#Google Cloud Platform blocks direct outbound mail on port 25. Reference: https://cloud.google.com/compute/docs/tutorials/sending-mail/

#Mail Server Settings
$smtpServer = "mail.yourdomainname.com"
$smtpPort = "2525" #Don't put 25 here - it will not work. See link above.

$att = new-object Net.Mail.Attachment($TranscriptFileName)
$msg = new-object Net.Mail.MailMessage
$smtp = new-object Net.Mail.SmtpClient($smtpServer, $smtpPort)

# Set the email from / to / subject / body / etc here:
$msg.From = "gcpsnapshots@yourdomainname.com"
$msg.To.Add("you@yourdomainname.com")
$msg.Subject = "GCP Snapshot Cleanup Report"
$msg.Body = "Please see the attached PowerShell transcript."

# Attach the log and ship it.
$msg.Attachments.Add($att)
$smtp.Send($msg)
$att.Dispose()

Do what you love.

Disclaimer: I have no idea if this post will ever be helpful at all to anyone else.  Sometimes writing something like this helps me make sense of my own thoughts.  Since I wrote it down, I thought I would share it.

A commonly repeated bit of wisdom is that it is a wise career choice to “Do what you love.”  I’ve heard this advice for years and at first glance, it seems good.  However, as you really dig into it, it can be difficult to figure out what it is that you really love.  What would it look like to actually do what you love for a living?

Is this what I really love?

I love to go to my Aunt’s lake house and ride the SeaDoo around the lake.  It’s one of my favorite leisure activities.  Should I do that for a living?  Is that what I love?  Thoughts like this go through my brain…  Perhaps I can make a living riding a SeaDoo.  Perhaps I could become a SeaDoo racer.  Do what you love – right!  YOLO!

Hang on a moment Mr. YOLO man.  Let’s pause and count the potential cost of doing that.  I might need to move away from my extended family to some warmer climate where you could do this year-round.  I might need to exercise a lot to get in great physical shape to race competitively.  It’s nearly impossible I would be able to earn much of a living doing this in the beginning, so for several years I would need to train to be a SeaDoo racer and work at another job to support my family.  My free time and a lot of my family time would be totally consumed by this.  Hmm.  This is sounding less awesome very quickly.

Once you look at what you think you love in the bright light of reality, the picture changes a bit.  I do enjoy riding the SeaDoo.  However, I don’t love it nearly enough to make all of the sacrifices I would need to make in order to make an actual career out of it.

Now what?  Give up?  Head back to the proverbial salt mine to spend the rest of my days doing something I really don’t love or perhaps even hate?  Nope.  Dig deeper. Here are a couple of things I have observed recently that have made me think differently about this topic.

Observation #1 – Olympic Swimmers

Recently we watched some of the 2016 Summer Olympics.  I was amazed watching the swimmers.  Think about what they did to even make it into that Olympic pool.  They exercised like crazy.  They ate healthy – probably extremely healthy.  They practiced over and over again, nearly perfectly, day after day for years upon years.  They sacrificed a lot of big things in their lives just to make it into the Olympics.

Watching them, I sat and thought at first, “Wow.  That is cool. They sure are fast!”  Then I thought a bit deeper about what being that fast must have required and I thought: “You know what – these people are crazy!  Why spend so much of your life for so many years on end to become THAT good of a swimmer?  Who cares!”

I for one frankly do not care nearly enough about swimming to do that.  I would not be willing to invest even a small fraction of the effort that the person who came in dead last must have invested, even if you could assure me that by doing so I could be an Olympic gold medal swimmer.  It’s simply not something I care that much about.  Those swimmers must really love something about swimming.  They have paid a tremendous price to get to this point.  I don’t think it is a price anyone would pay, if they did not love it.

Observation #2 – Wonky I.T. Security Topics

Last week, somehow I came across something that piqued my interest in an I.T. security book called The Art of Memory Forensics.  So, I paid ~$50 and ordered it from Amazon.  It came in on Friday and I proceeded to give it, and the great tool it is written about, a ton of my weekend free time.

Why?  Because memory forensics is awesome!  Well – at least I think it’s awesome.  It’s a tool that can help me do something I actually love even better.  It opens up another angle from which to attack the problems that I wake up thinking about.  It is a small piece of a grander puzzle that has had me fascinated for years.

If most of you were to read this same 858 page book, you would hate it.  You would be bored to death.  You would not be willing to invest a fraction of the time that I will happily invest on this topic.  You would probably not do it even if you could become one of the best memory forensics people in the world.  You would nearly die of boredom or confusion or perhaps both during the first few hours.  Why?  Because most of you don’t care at all about this topic.  You don’t care so much that I bet your brain would nearly refuse to focus on this for long enough to really learn much about it.  You don’t care about memory forensics in the same way I don’t care about swimming fast.

What I have learned

Why do I love topics related to Computer Security?  Honestly, I have no idea.  I just do.  Perhaps it is just what God put me on this Earth to do.

Looking back over my life, I can see a clear interest in this topic all the way back to when I was a kid.  I remember one year, my family was at the beach and somehow I ended up reading a bunch of Tom Clancy books.  Perhaps they happened to be at the house we rented for the week.  Spy stuff, military stuff, tapping undersea cables to gather intelligence on the bad guys – all of it seemed so fascinating!  I did not get nearly the amount of sun that my brothers and cousins did that year.

After hearing I was a Tom Clancy fan, my High School Principal pointed me to a book by a guy named James Bamford titled The Puzzle Palace.  I had never heard of the National Security Agency before this book, but I read every word.  Again – totally fascinating.

Fast forward my life story a bit and I ended up in an I.T. career.  As my career has progressed, I’ve always gravitated to areas that fit within the broad categories of Information Assurance / Information Security.  I love configuring firewalls.  Seeing an IPS alert on a blocked attack is an actual thrill for me.  I love well planned and well configured backup systems.  Quickly restoring data that was destroyed by a crypto ransomware attack, knowing that the capability to do that means the criminal will not get a dime of my client’s money, makes me happy on the inside.  I imagine I love these things as much as those crazy swimmers love swimming.  It’s just IN me.  It’s what I actually love doing.

Could I do something else?  Sure, but I might not enjoy it enough to get really good at it.  For me, this is the area that is so fascinating that I will willingly invest my free time and personal money to learn even more.  For me this is a marker.  It’s a hint.  It’s an indicator.  It’s a pointer that points to what I must really love.

My $.02 worth of advice for you if you are wondering about what you really love.

Sometimes people struggle to figure out what they love.  If you can’t figure out what it is that you love, look at your life and ask yourself this question:  What is it that I am so fascinated with that I will happily spend my free time and my own money learning more about?  For some of you it is music.  For others, it’s real estate.  Perhaps for some of you it is cooking.  For others it’s helping hurting people put their lives back together.

Look for patterns.  If you’ve been fascinated with something for years and you’ve spent your own money and your own free time to learn more about it and/or do more of it – pay attention.  That might be your thing.

If you think you might want to make a career out of it, pause and run it through the sacrifice filter first.  Ask yourself “Do I really love this enough to sacrifice what would be required to become good enough at this to make a living out of it?”  Be warned – the sacrifice required will probably be even higher than you expect upfront.

If you are not willing to sacrifice enough to make a career out of something, that’s ok.  Perhaps whatever it is can still be a great long term interest for you.  For me it has been helpful to have thought of some things and then intentionally set them aside as career options.  Doing this frees you up just to enjoy them as interests without getting stuck thinking about a career move that you know deep down you are not willing to actually make.  Set them aside and over time move on to the next thing you identify.  Rinse and repeat.  Eventually, you might hit on the thing you really enjoy where the cost to actually do it fits with what you are willing to sacrifice.

Back to me for a moment…

Do I do exactly what I love 100% of my work time?  Nope.  I honestly doubt anyone does.  However, I get to do enough of it that the sacrifices are worth it.

If I’m lucky, I’ll make it to my Aunt’s place this weekend to ride around on the SeaDoo.  However, when I get home and clean up, I’ll probably be thinking about some I.T. security related topic while I’m in the shower.

If anyone reads this far – I’ll be amazed.  If you do, I sure hope you can find and do work you love as a career too.  If you want to read more on this topic, here are a few links to folks who have shaped my thinking on this.

Windows 10 – Unintentional Upgrades

In the last week or two I have gotten a significant number of calls from clients who have had PCs unintentionally upgraded to Windows 10.  While I generally like Windows 10, I do not believe that Microsoft should be doing what they are doing and essentially upgrading people automatically by using deceptive practices.  So far, I have not seen this happen on machines that were joined to a Windows Active Directory domain.  However, I have now seen it 10+ times on machines not joined to a domain.  Here is what I think you should know.
Microsoft is misbehaving:
If you receive the free Windows 10 upgrade notification and click the X, rather than simply closing the upgrade offer app, Microsoft considers this your acceptance of the upgrade and schedules the upgrade.  This is absurd and inexcusable.  Here is an in-depth story about this.
How to prevent an upgrade from happening if you wish to stay on Windows 7 / 8 / 8.1 etc:
Simply download and run the Never10 app.  A good description of how to do this can be found here:
How to roll your PC back to Windows 7 / 8 / 8.1 if you have been unintentionally upgraded to Windows 10 against your wishes:
Fortunately, this is very simple and so far seems very reliable.  Here are directions directly from Microsoft.
I hope this is helpful to you.

Google Cloud Platform (GCP) vs Amazon Web Services (AWS) vs Microsoft Azure – Cloud IaaS – Price Comparison

I’ve decided to share some public cloud Infrastructure as a Service (IaaS) compute instance cost analysis that I recently created as part of a project for one of my clients.  When choosing an IaaS provider there are obviously many things to consider beyond just compute instance pricing.  Other factors such as storage, network bandwidth, snapshot and replica options and many other factors (and costs) come into play.  Each of these providers offers many different services that may be of differing value to potential customers.

Conclusions (up front for you TL/DR folks):

  • The commonly accepted wisdom is that these providers are locked in a price war and that they have all closely matched each other’s pricing.  Nothing could be further from the truth.  Instances from Microsoft Azure are dramatically more expensive than Amazon Web Services and Google Cloud Platform no matter how you slice the data.  Google Cloud Platform and Amazon Web Services pricing looks close if you compare total three year costs.  However, how you get those numbers to be close (writing AWS big checks upfront) is dramatically different.
  • Based on the numbers we chose for cost of capital (5%) and likely future IaaS price cuts (15% /yr), AWS does in many cases offer the lowest cost three year option IF you are willing to pay substantial amounts upfront.
  • Google Cloud Platform offers extremely competitive pricing with no upfront purchase needed.
  • Windows is expensive.  In some cases the cost difference between a Linux instance and a Windows instance exceeds the cost of the Linux instance itself.  Think about that for a moment.  The cost of your OS choice can more than double the cost of your instance.  I love Microsoft.  I love Windows.  I hope this changes.

Update – 12/6/2016 – A Microsoft rep posted this comment on my LinkedIn post of this article.  Keep this in mind as you compare prices.

If potential Azure customers talk to their local Microsoft sales rep they can chose to buy via a so called “Compute Pre Purchase” option. It will give you up to >45% savings for modern compute instances depending on the location and instance family. You need to decide for a location, instance type and pay for one year upfront but still might be appropriate for many use cases. Microsoft will very soon offer an easier way to leverage those savings and offer more options as well as longer term periods, etc. very soon.

Methodology:

In order to simplify some of the discussion for the purposes of this post, we’ve made the following assumptions.

  • We will look at only four similar instance sizes.
  • We will not consider storage, bandwidth or other costs.  Perhaps that will be a discussion for another post.
  • We will look at the cost difference between running Linux and Windows instances.
  • We will consider and attempt to model the different purchase options available from each provider.
  • We will compare the costs for running these compute instances for both one and three year terms.
  • We will assume 100% sustained use during the entire period considered.

Instance sizes:

  • Small – At least 1 CPU core / ~4GB of RAM
    • Specific instances we chose to compare: AWS: t2.medium / GCP: n1-standard-1 / Azure: GP A2
  • Medium – At least 2 CPU cores / ~8GB of RAM
    • Specific instances we chose to compare: AWS: m4.large / GCP: n1-standard-2 / Azure: GP A5
  • Large – At least 4 CPU cores / ~16GB of RAM
    • Specific instances we chose to compare: AWS: m4.xlarge / GCP: n1-standard-4 / Azure: GP A6
  • Extra Large – At least 8 CPU cores / ~30GB of RAM
    • Specific instances we chose to compare: AWS: m4.2xlarge / GCP: n1-standard-8 / Azure: GP A7

There is no perfect way to compare things that are not identical.  So, we have chosen what we believe to be fairly similar instance types to compare.

Provider Pricing Model Discounts:

Each provider offers ways to purchase instances in order to save some money.

Amazon Web Services: AWS offers a variety of purchase options.  These options can result in significant savings.  Explaining how reserved instances work is beyond the scope of this article.  For more detail on this topic go here: https://aws.amazon.com/ec2/purchasing-options/reserved-instances/ .  In general, the longer term you are willing to commit to and the more you are willing to pay upfront, the higher the discount you can get.

Azure: Microsoft Azure offers a flat 5% discount if you are willing to pre-pay for 12 months of service upfront.  https://azure.microsoft.com/en-us/offers/ms-azr-0026p/.  The 5% Microsoft discount is frankly not very enticing compared to the significant discounts you can get from AWS for prepayment, and compared to the discounts you get from Google for simply using instances on a sustained basis.  Since a three year upfront purchase is not possible, when we modeled Azure three year costs we did so by estimating the cost of three annual purchases.

Google Cloud Platform: Google offers great discounts for sustained use.  You don’t need to pre-purchase anything, you get the discounts automatically.  The discounts are very substantial.

Cost of capital:

For purposes of this post we also wanted to consider the cost of capital.  It is also not reasonable to compare spending a large sum of money upfront with spending no money upfront and simply paying for what you use on an ongoing basis.  So, for purposes of this discussion we are going to assign a relatively arbitrary 5% annual cost of capital to options where prepayments are considered.
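To make the cost-of-capital adjustment concrete, here is a minimal sketch (the payment amount and term below are hypothetical examples, not figures from our analysis) of how an upfront payment can be penalized at 5% per year:

```python
# Minimal sketch: charge a 5% annual cost of capital against money paid upfront.
# The $3,000 payment and three-year term below are hypothetical examples.
def effective_upfront_cost(payment, years, annual_rate=0.05):
    """Effective cost of an upfront payment after accounting for the
    capital it ties up over the term, compounded annually."""
    return payment * (1 + annual_rate) ** years

# A $3,000 three-year prepayment is treated as costing about $3,472.88.
print(round(effective_upfront_cost(3000, 3), 2))
```

The exact way to model this is debatable (compounding period, discount rate), but any reasonable choice makes large upfront purchases look somewhat more expensive than their sticker price.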

Expected future IaaS price reductions:

The costs of public cloud IaaS continue to drop.  For purposes of these calculations, when we look at one year costs we will assume that no price drops will happen during the middle of our one year term.  For purposes of our three year estimates, we will assume that a 15% price reduction will happen at the end of year one, and another 15% price reduction will happen at the end of year two.   Obviously, these are best guess estimates and we could easily be wrong.
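As a worked example of that three-year pay-as-you-go model (the $100/month starting rate is a hypothetical number, not one of our actual data points):

```python
# Sketch of the three-year pay-as-you-go estimate: full price in year one,
# then a 15% price cut at the end of year one and again at the end of year two.
# The $100/month starting rate is a hypothetical example.
def three_year_payg_cost(monthly_rate, annual_cut=0.15):
    yearly = 12 * monthly_rate
    return sum(yearly * (1 - annual_cut) ** year for year in range(3))

# $100/month today -> $1,200 + $1,020 + $867 = $3,087 over three years.
print(three_year_payg_cost(100.0))
```

In other words, under these assumptions a pay-as-you-go customer pays roughly 86% of today's list price over three years, which narrows the gap with prepaid options.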

Shameless Plug:

If your business needs help figuring out how to best architect public cloud infrastructure, we would love to help.

Raw data:

Linux – 1 Year

Note: A 5% cost of capital has been used for these calculations where an upfront purchase was required.

Linux-1year-small

Linux-1year-medium

Linux-1year-large

Linux-1year-extra-large

Linux – 3 Years

Note: A 5% cost of capital has been used for these calculations where an upfront purchase was required.  A 15% annual cost reduction has been estimated.

Linux-3year-small

Linux-3year-medium

Linux-3year-large

Linux-3year-extra-large

Windows – 1 Year

Note: A 5% cost of capital has been used for these calculations where an upfront purchase was required.

Windows-1year-small

Windows-1year-medium

Windows-1year-large

Windows-1year-extra-large

Windows – 3 Years

Note: A 5% cost of capital has been used for these calculations where an upfront purchase was required.  A 15% annual cost reduction has been estimated.

Windows-3year-small

Windows-3year-medium

Windows-3year-large

Windows-3year-extra-large

If you wake up and see IPs you support routing to China, it’s going to be a rough day.

If you wake up and see IPs you support routing to China, it’s going to be a rough day.  Today – was a rough day.

  • At 4:35AM EDT my network monitoring system alarmed that a client’s site-to-site VPN connection was down between the client’s office in NC, and our data center in Atlanta, GA.
  • At ~ 6:15AM EDT I woke up and saw the alarm.  I immediately began testing / collecting data.  It quickly became obvious that this was a routing issue.  Connectivity from some networks (Road Runner and several others) to our client’s data center IPs was broken.  Curiously – traffic from Road Runner / Time Warner Cable was routing out to a router in Los Angeles, CA then dying.
  • In order to open trouble tickets for a routing issue, you need trace routes.  So I collected several showing networks that worked and ones that did not – in both directions.  Then I opened tickets with Road Runner / Time Warner Cable (the client’s ISP) and the data center (who provides us IPs as part of a BGP mix of bandwidth they maintain and optimize).
  • After some additional troubleshooting while waiting to hear back on my trouble tickets, I noticed that a new BGP advertisement which included our IPs was published at nearly the exact same time that the site-to-site VPN failed.  I’ve sanitized the screen shot to protect the innocent (my client) and the guilty (a Chinese ISP).  The red blocks contain IP details I’ve intentionally removed.
    bgp_update
  • After some troubleshooting we were able to determine that a Chinese ISP had published a bogus BGP advertisement.  The Chinese ISP was wrongly advertising a /20 block of IPs (which included some of ours).  They actually own a /20 that is one character different from the block they advertised.  It appears they simply made a typo somewhere and caused all of this.
  • Our data center NOC team reached out to the Chinese ISP NOC to see if they could get them to remove this wrong advertisement.
  • At 10:25AM EDT our monitoring system recorded the site-to-site VPN coming back online.
  • When I arrived at the client site (where I was scheduled to be today anyway) – I tested and the bogus BGP advertisement had been removed.

So – what is the take away from this?  What can be learned?  Here are a few things – several of which I knew intellectually previously and I know at more of a gut level now.

  • False BGP advertisements can create a real mess.  I knew this previously – but it never impacted me as harshly as it did today.  Want to read more on how bad this can be?  Check out the BGPMON blog here: http://www.bgpmon.net/blog/.
  • It seems some ISPs filter or manage BGP more carefully than others.  For example – Level 3 never seemed to be affected by this bogus BGP update.  Time Warner / Road Runner apparently accepted it nearly immediately.  I’m no BGP guru at all – but wow, improvement is needed here.
  • In the future before I open a routing issue ticket, I’ll take a look not only at trace routes, but also at BGP advertisements.  Huge thanks to Hurricane Electric for a great looking glass tool that ultimately helped me get to the bottom of this.
My experience with my first I.T. security Capture The Flag (CTF) contest while at BSides LasVegas (BSidesLV).

Background:  For the last two years, I’ve gone to the annual BlackHat USA conference in Las Vegas. I’ve loved it both years. The conference quality along with the presentation quality at BlackHat is fantastic. This year I decided to switch it up a bit and go to BSidesLV and do one day at BlackHat (business pass only) after BSides was over.

What is a BSides?  For a lot of good info on this you can go here. Basically, it is a community sponsored I.T. security conference.

What is an I.T. security capture the flag contest?  Essentially it’s a contest where you and your team defend I.T. systems under your control while attacking the systems of other teams.

Pre-Conference: BSides is completely free to attend (which is amazing). I chose to sign up as a BSidesLV sponsor a few weeks before the conference. I chose the Rock level of sponsorship which was ~$100. I wish I could say I was a really great guy and I just wanted to help out, but the truth is I did this in order to get a reserved ticket. The DefCon (another I.T. security conference in Vegas) ticket line is legendary and I wanted to avoid anything remotely like that if at all possible. I also wanted to be sure I got in. I did not want to go all the way to Vegas only to have BSides run out of passes. So, I forked over my $100 and booked some travel.

While reviewing the BSidesLV web site, I noticed the ProsVsJoes capture the flag contest. I was intrigued and decided to sign up as a “Joe” since I don’t do infosec (Information Security) full time, and I had never participated in a CTF (capture the flag) before. My plan was to get on a team, contribute where I could, have some fun and learn some stuff. Signing up for the CTF meant that I would miss essentially all of the other BSidesLV sessions. It’s rumored that most of the sessions are recorded and posted online shortly after the conference. I hope that is true because there were some really cool looking sessions I would still like to see.

A few days after signing up as a Joe, @dichotomy1 (Dichotomy) dropped me an email and asked if I might be willing to serve as a Pro acting as a team captain for one of the blue (defense) teams. We went back and forth a bit about my credentials and experience. He mentioned that at that point he had three full teams, but if more folks signed up he might like to add a fourth team, and that team would need a captain. He mentioned most of the team captain duties were management / administrative / coordination in nature, so I agreed to do it if he needed the fourth team.

A few days after that he emailed to let me know that our team was a go. He set up a mailing list for our team, I chose the team name Labyrinth Guardians and we were off to the races.

Over the next couple of days, each of our team members introduced themselves over the email list. I started and shared a Google Drive folder and Google Doc that became our team-planning document.   I encouraged our team to take a collaborative approach, and boy did they ever do that.

Everyone started to share ideas and questions in the Google Doc. It started to become clear that I had a group of guys who were really engaged. As an aside – there were female CTF participants; I just did not have any on my team. We all wanted not only to participate, but to win. I set up a group of smaller functional teams in the document, and asked the guys to pick a team to be a part of as their primary focus. We hoped this would help us all get down to business faster in areas where each of us could bring our expertise and background to bear. While most of the guys on my team did not seem to do infosec full time either, we had a good array of skills. So, our functional teams ended up being very well rounded.

We decided to schedule a team call to talk through our strategy. As usual, with a group this size we could not find a time that worked for everyone, so we took a time that worked for most of us and ran with it. Initially, I tried to set up a Google Hangout on Air (so we could record the call for other team members). That ended up failing (likely because I did something wrong), so we quickly switched over to a Skype call. It was a messy 15 minutes trying to get the alternate call up, but the team hung in with me. Finally, 5 or 6 of us were on a call together. We spoke for an hour or two, during which time we got to know each other better and planned for another meeting early on the first day of the conference in Vegas. We also spun up a Slack account for our team to use for real time communication. I can’t say enough positive about this tool. It enabled very efficient real time communication that gave us an advantage.

A couple of the guys volunteered their rooms as a meeting spot, and we agreed to meet at 8AM local time in Vegas. The conference and the CTF started at 10AM. So, we had about an hour to get in sync and plan, then we all went over to the conference and got settled in.

Go time – Day 1:  At 10AM, it was supposed to be go time. However, the wireless network for the CTF was not cooperating.   The guys who run the BSidesLV network and Dichotomy were working hard to fix things. Eventually, we got to a state where Dichotomy was able to kick things off. My understanding is that next year they are planning to go wired – which makes lots of sense.

The scenario for the CTF was that we were essentially taking over a network that had previously been run by idiots. Dichotomy called it “horribly vulnerable”. Our job was to keep network services up and running, deal with user requests (tickets), and find flags in our environment. We were to do all of this while a red team of professional penetration testers was attacking us.

The game was scored by a proprietary program Dichotomy developed called “Score Bot”. Score Bot periodically measured our service uptime, how many tickets we had closed, how many flags we had submitted, and how many of our hosts had been compromised by the red teamers. When things kicked off, our guys went to work in their functional team areas: the Windows team went to work on the Windows boxes, the *nix team went to work on the *nix boxes, and so on.

We were doing fairly well by midday on the first day. However, we were heavily focused on finding and submitting flags. We had found several, but there was significant ambiguity in how we were supposed to submit them to Score Bot. Several of our guys banged away on this for a while and eventually figured it out, after expending some significant time. One of our team members noticed that flags did not count for very many points, but that closing tickets from fake users counted for a lot. So, we started to prioritize dealing with tickets.

At the end of day one, we had won. A screenshot of Score Bot is below. We had done a decent job of keeping attackers out of our boxes, all while closing tickets like madmen. Team morale was high. We were excited to have won day one, but we all felt we were also a bit lucky. Our expectation was that the work we did on day one securing boxes would pay off on day two. So, we all headed our separate ways and agreed to meet in the CTF area at 9AM on day two.

[Screenshot: CTF – Day 1 final scores]

Day 2 – Attack time: I think we were all looking forward to day two. On day two, the red team members break up and embed with the blue teams. Then, each blue team goes to work attacking the others. In general, the majority of our team seemed to have more experience on the defense side of things, so day two was a great opportunity to learn from a pro red teamer. We got a great red teamer on our team. He quickly engaged and brought his experience to bear for our cause.

Before we started on day two, we also discovered that the entire environment was going to be reset to the state it was in before we started the prior day. So, all of our defense work from day one was effectively lost, and we had to do it again. In addition, the scoring model was going to change: tickets would count for fewer points, and flags would count for more. Since we knew the environment, we hit the ground running. We got boxes locked down as much as possible while maintaining service uptime. We submitted flags like madmen. We also went to work scanning both our own network and our adversaries’ networks for vulnerable systems. We used Nmap, Nessus, and OpenVAS for this work.
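For readers curious what those discovery scans look like, here is a hedged sketch from a PowerShell prompt. The subnets and output filenames are illustrative assumptions, not the actual CTF ranges, and this is not exactly what we ran:

```powershell
# Hedged sketch of the discovery scans. The 10.0.1.0/24 and 10.0.2.0/24
# ranges and the output filenames are illustrative assumptions only.

# Service/version sweep of our own subnet, to find what needed locking down:
nmap -sV -T4 -oA our-net 10.0.1.0/24

# Deeper scan of a competitor's range with Nmap's default NSE scripts,
# to flag likely-vulnerable services for the red teamer:
nmap -sV -sC -T4 -oA competitor-net 10.0.2.0/24
```

The `-oA` flag writes all three Nmap output formats at once, which made it easy to share results with the rest of the team over Slack.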

Once we identified vulnerable hosts on our network, we mitigated the vulnerabilities as quickly as we could. However, we identified multiple RCE (remote code execution) vulnerabilities that we were never able to fix due to issues in the environment. After we had our stuff at least heading mostly in the right direction, we started kicking off scans of the competition. Once we identified vulnerable hosts on the other teams’ networks, we worked with our red team member to start attacking them.

Fortunately, our work on the defense side of things, along with a bit of luck, paid off well. By mid-afternoon, none of our boxes had been compromised yet. All of the other teams had boxes that had been burned, some thanks to our guys working with our red teamer. At one point, Dichotomy was essentially asking the other teams to focus on us and break into our boxes. Since we still had some unresolved vulnerable systems, they eventually did get into a couple of our lower-priority boxes.

[Screenshot: “Come at me bro”]

We started to focus heavily on offense later in the afternoon. We identified the team that was closest to us in points (SaltyGoats – SG) and went after them with everything we had. We had identified some unresolved vulnerabilities in their environment, so our red team pro went to work. After about an hour of focused work, he had compromised their Active Directory Domain Controller (AD DC), so we essentially owned their Windows network. Our red teamer did some work so that Score Bot would know that a couple of our competitors’ boxes were owned. Then we started to plan for what we would do once the “scorched earth” rules went into effect at 4PM. Prior to those rules going into effect, all we could do was signal Score Bot that we owned those boxes; we were not allowed to destroy them. Once the scorched earth rules started, however, it was open season.

Because of the way the scoring engine worked (which was public knowledge from the beginning), a down domain controller would cause our competition to lose all availability points, because DNS in the environment would fail, and all the rest of the scoring of their environment depended on it. So, we started to talk about how to kill their AD box, to which we had gained administrator-level access. In the end, we opted to break the box in a way that would not allow it to boot (we marked the boot partition inactive using diskpart), then simply rebooted it. After a couple of minutes, their network went all red. Mission accomplished.

[Screenshot: CTF – Day 2 final scores, 8PM EDT]

After that, a few of our guys continued to focus on attacking, but at that point we had pretty well established a point lead that would be hard for others to catch up to in the time remaining. At 5PM, Dichotomy had the red teamers come up and give a presentation on how they attacked us. We did not learn anything especially surprising, but it was a good overview and good education.

In the end – we won. The official scores are below. These scores differ from the screenshots shown because positive ticket scores did not count on day two (one of the rule changes). So, essentially, our total score minus our positive ticket scores equaled our ending score. Also – it’s doubtful I got my screenshot at the exact last moment.

[Screenshot: official final scores]
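For the curious, marking a boot partition inactive with diskpart can be scripted from PowerShell. This is a hedged, destructive sketch, not exactly what we ran; the disk and partition numbers are assumptions that must be confirmed with “list disk” / “list partition” on the actual machine, and this will render a box unbootable:

```powershell
# DESTRUCTIVE sketch — renders a Windows box unbootable. Disk/partition
# numbers are illustrative assumptions; verify them with "list disk" and
# "list partition" inside diskpart before running anything like this.
$DiskpartScript = @"
select disk 0
select partition 1
inactive
"@

# diskpart /s runs a script non-interactively (requires an elevated shell).
$DiskpartScript | Out-File -Encoding ascii C:\temp\kill-boot.txt
diskpart /s C:\temp\kill-boot.txt

# Reboot immediately; with no active partition, the firmware finds
# nothing to boot and the box stays down.
Restart-Computer -Force
```

On a legacy BIOS/MBR system, clearing the active flag is enough to stop the boot; the underlying data is untouched, which is why this was a quick, reversible-in-principle way to take the box offline.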

Thanks!  I believe all of our guys had a good time. I know I sure did! With that in mind, I’d like to thank some folks who worked hard so that we could enjoy this experience.

  • The folks who got BSides started, and have grown it into an amazing movement. For some great back story, check out this podcast with @Jack_Daniel where he explains how this got started. While I’ve never had the pleasure of meeting Jack, I have seen him in the wild and he is an incredibly well spoken (not to mention well dressed) guy. He is an absolute gift to the infosec community.  Clearly one of the really good guys.
  • The entire team who makes BSidesLV happen. Organizing a conference this large is an enormous amount of work. Organizing a free community driven conference on a limited budget with only a group of volunteers and pulling it off as well as these folks do is an absolute work of art. You folks are amazing in every sense of the word.
  • The BSidesLV network team. You all showed tremendous grace under fire dealing with the CTF WiFi issues. Well done!
  • Dichotomy – Dichotomy put together a really great CTF event for us to participate in. The amount of work required to create this game environment must have been huge. Thanks man! You gave us all a great opportunity to learn, have fun, and grow. We appreciate it! I’ll forever remember this fondly as my first CTF.
  • My team (LabyrinthGuardinas) – You guys ROCKED! It was great to work with such a good group of guys who were willing to do whatever was needed to succeed. Your skills, patience, flexibility, creativity, and generally awesome ideas are what allowed us to win. I hope you guys had as much fun as I did.
  • Our friendly competition: Let’s be honest – any of us could have won this thing. Our friendly competition is just as smart as we are. I spoke to one of the other team captains after we were done on day two, and he was just a great guy. I hope we get the opportunity to get to know you guys better in the years ahead.

What’s next? I want to contribute to the BSides movement. This year, I basically enjoyed an awesome event because of the work of a lot of dedicated folks. Now I feel like it’s time for me to get more involved and contribute. I’m not sure exactly where that is yet – but I’m determined to help where I can. What about you?
