Deserted Island DevOps Postmortem


In my experience, it’s the ideas that you don’t expect to work that really take off. When I registered a domain name a month ago for Deserted Island DevOps, I can say pretty confidently that I didn’t expect it to turn into an event with over 8500 viewers. Now that we’re on the other side of it, I figured I should write the story about how it came to be, how I produced it, and talk about some things that went well and some things we could have done better.


Mono in Debian 9 Containers


Running Debian 9 and need to add the Mono repository? You'll find advice written for Debian 8 that suggests using the following:

sudo apt install apt-transport-https dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb https://download.mono-project.com/repo/debian stable-stretch main" | sudo tee /etc/apt/sources.list.d/mono-official-stable.list
sudo apt update

When it comes time to docker build, you might see the following:

Step 6/12 : RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys A6A19B38D3D831EF
 ---> Running in abbbdefb9d15
Executing: /tmp/apt-key-gpghome.GbZgRWnneE/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys A6A19B38D3D831EF
gpg: cannot open '/dev/tty': No such device or address
The command '/bin/sh -c apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys A6A19B38D3D831EF' returned a non-zero code: 2

Don't despair! The following line in your Dockerfile (replacing the apt-key adv command) will get you going:

RUN curl https://download.mono-project.com/repo/xamarin.gpg | apt-key add -
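For context, here's a rough sketch of how that fits into a full Dockerfile. The base image and the final package list are just examples; adjust for whatever you're actually building:

FROM debian:stretch

# curl and gnupg are needed to fetch and import the key; apt-transport-https for the repo itself
RUN apt-get update && apt-get install -y curl gnupg apt-transport-https ca-certificates

# import the Mono signing key without needing a tty
RUN curl https://download.mono-project.com/repo/xamarin.gpg | apt-key add -

# register the Debian 9 ("stretch") repository and install Mono
RUN echo "deb https://download.mono-project.com/repo/debian stable-stretch main" > /etc/apt/sources.list.d/mono-official-stable.list \
    && apt-get update \
    && apt-get install -y mono-devel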

OpenTracing for ASP.NET MVC and WebAPI

 0 Posted on by

Preface - I really like what Microsoft is doing with .NET Core and ASP.NET Core.

However, the horror they've unleashed upon the world in the form of ASP.NET MVC and WebAPI is a sin that will take more than a few moons to wash away. That said, quite a few people are still building software using this stuff and I got curious how you'd do instrumentation of it via OpenTracing. This post is the result of several hours of hacking towards that end.

Action Filters For Fun And Profit

It's actually pretty straightforward, assuming you know what to Google and can handle the absolute state of the documentation that's available. At a high level, here's how it works: ASP.NET - similar to Java Servlets - provides Action Filters, which are simple lifecycle hooks into the HTTP request pipeline. There are four interfaces you can target if you want to be more specific, but a fairly trivial implementation of a logger can be done like so:

public class CustomLogger : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        Debug.WriteLine($"executing controller: {filterContext.RouteData.Values["controller"]}");
        // etc etc...
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        Debug.WriteLine($"result complete in controller: {filterContext.RouteData.Values["controller"]}");
        // etc etc...
    }
}

Pretty straightforward, like I said. There are also OnActionExecuted and OnResultExecuting, which are called after a controller action and before a controller action result, respectively.

So you'd think it'd be pretty easy, right? OpenTracing provides a handy GlobalTracer singleton, so create a TracingFilter...

public class TracingFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var routeValues = filterContext.RouteData.Values;
        var scope = GlobalTracer.Instance.BuildSpan($"{routeValues["controller"]}").StartActive();
        scope.Span.SetTag("action", routeValues["action"].ToString());
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        var scope = GlobalTracer.Instance.ScopeManager.Active;
        scope.Span.Finish();
    }
}

Then in your RegisterGlobalFilters method, do a quick filters.Add(new TracingFilter()), register a Tracer, and away you go! Right?
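(For reference, a minimal sketch of that registration - assuming the stock FilterConfig class the MVC template generates - looks something like this:)

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        // register the tracing filter for every MVC request
        filters.Add(new TracingFilter());
    }
}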

Wrong.

Well, half-right.

That Sounds Like Me, Yeah.

Assuming you're only using MVC, you're right. So you'll see spans for, say, GETting your index page, but not for any of your API routes. Why? Because there are two ActionFilterAttributes. The one we just wrote is System.Web.Mvc.ActionFilterAttribute. Want your WebAPI traced too? Time to create a System.Web.Http.Filters.ActionFilterAttribute. You can tell them apart by the extremely different method signatures, as seen here -

public class WebApiTracingFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var scope = GlobalTracer.Instance.BuildSpan(actionContext.ControllerContext.ControllerDescriptor.ControllerName).StartActive();
        scope.Span.SetTag("action", actionContext.ActionDescriptor.ActionName);
    }

    public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
    {
        var scope = GlobalTracer.Instance.ScopeManager.Active;
        scope.Span.Finish();
    }
}

Yeah, that took me a few minutes and this StackOverflow answer to puzzle out. C'est la vie.
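One bit of housekeeping before moving on: the WebAPI filter needs to be registered too. You can decorate individual controllers with it (as in the next snippet), or register it globally; a rough sketch of the global version, assuming the stock WebApiConfig, would be:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        // apply the WebAPI tracing filter to every API controller
        config.Filters.Add(new WebApiTracingFilter());

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}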

That said, this is pretty much the hard part. Since you've got spans being automagically started and finished whenever a request hits the pipeline, you can implicitly utilize those parent spans inside a controller to create children:

[WebApiTracingFilter]
public class ValuesController : ApiController
{
    public IEnumerable<string> Get()
    {
        var returnValue = getCurrentTime();
        return new string[] { returnValue };
    }

    private string getCurrentTime()
    {
        using (var scope = GlobalTracer.Instance.BuildSpan("getCurrentTime").StartActive())
        {
            return DateTime.Now.ToShortDateString();
        }
    }

    // and so forth...
}

You can also get fancy with your OnActionExecuted/OnResultExecuted filters by checking for exceptions coming in and adding stack traces to your span logs.
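As a quick, hedged sketch of what that might look like in the WebAPI filter (the log field names here are just a common convention, not something the library forces on you; you'll also need System.Collections.Generic for the Dictionary):

public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
{
    var scope = GlobalTracer.Instance.ScopeManager.Active;
    if (actionExecutedContext.Exception != null)
    {
        // mark the span as errored and attach the exception details to its logs
        scope.Span.SetTag("error", true);
        scope.Span.Log(new Dictionary<string, object>
        {
            { "event", "error" },
            { "message", actionExecutedContext.Exception.Message },
            { "stack", actionExecutedContext.Exception.StackTrace }
        });
    }
    scope.Span.Finish();
}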

If you'd like to check out the complete sample project I made, it's on GitHub.

Automating Jira Ticket Creation with Python

 0 Posted on by

Jira - love it, hate it, begrudgingly accept it - it's a fact of life for many of us in the software world. One thing that particularly sucks about Jira to me is that there appears to be an eternal tension regarding process.

You've probably got a boatload of various processes that you'd like to be somewhat repeatable and easy to discover. In my career, I've seen these processes be documented in a variety of places. Wikis, random Word documents on a shared drive, a shared Google Document, a single Jira ticket that's cloned over and over... the list goes on. The problem always comes when you want to embed some sort of information in these tickets (for instance, version numbers for a deployment process to better disambiguate which tickets match up to which deployment) and you want to do it in a way that's easily accessible and versioned.

At Apprenda, we've been striving to become more releasable. Part of this is the automation of our release process, which fits the above criteria to a T. I wrote a little utility that helps us create a series of tickets for release processes, and I'll talk about it here and share it with you, Internet, in the hopes that it may help someone else who finds themselves in my shoes.

ticketgen

The repository is located here and will probably be helpful to refer to.

This tool makes a lot of assumptions about process, first off. Mostly that it's being used for a particular process - a new release of an existing piece of software that requires end-to-end tests of multiple scenarios. Of course, you can fork it and make it do whatever you want. I've added some defaults to the options file that should indicate how it's used.

There are a few interesting scenarios exposed within it, though. If we look at the install/upgrade section of options.ini, we can see one.

[CleanInstallSection]
summary: rel-{0}: Clean install on {1} cloud.
description: This is an automatically generated stub!

[UpgradeSection]
summary: rel-{0}: Upgrade from {1} on {2} cloud.
description: This is an automatically generated stub!

The UpgradeSection specifically calls out a particular cloud type, which is hard-coded as either 'single' or 'hybrid' cloud in the script. This could be changed to some other interesting configuration for your purposes. There are format_summary and format_description methods on the Ticket class that will let you pass any number of values into those fields, so you could do something like this if you had more tokens you wanted to switch in:

summary: rel-{0}: {1} test on {2} config with {3} provider
description: Run the {0} test suite with the {1} option enabled on {2}

# Then process those like so:

ticket.format_summary("1.0.0", "regression", "secure", "aws")
ticket.format_description("regression", "secure", "aws")

You could also add those values to the options file and iterate over them.
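For example, a minimal sketch with Python 3's configparser - the section and option names here are made up for illustration, not necessarily what ticketgen itself uses:

# sketch: pull extra token values out of options.ini and build one summary per provider
import configparser

config = configparser.ConfigParser()
config.read('options.ini')

version = config.get('ReleaseSection', 'version')
providers = config.get('ProviderSection', 'providers').split(',')

for provider in providers:
    summary = config.get('ProviderSection', 'summary').format(version, 'regression', 'secure', provider)
    print(summary)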

How It's Helping

Since this all runs inside a Docker container, it can easily be added into a CI/CD pipeline. Preprocess the options file, swap out version info or add it as needed, then build and run the new container.
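A rough sketch of what that pipeline step might look like (the {{VERSION}} token and image name are illustrative, not something the repo defines):

#!/bin/sh
# substitute the release version into the options file, then build and run the generator
sed -i "s/{{VERSION}}/${RELEASE_VERSION}/g" options.ini
docker build -t ticketgen .
docker run --rm ticketgen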

For some perspective, we're using this to create about 30-odd tickets for each release we ship. Previously, we used a shared wiki page that multiple people would edit and 'mark off' what they were working on. The advantage of the Jira-based solution is that it's easier at-a-glance to see what needs to be done, what hasn't been done, and how these tasks grow or shrink over releases. I've especially found that having this information in Jira is beneficial when trying to communicate across the business and demonstrate areas where improved automation or tooling would be beneficial. It's also been useful for people who aren't attached to the release process to understand where the team is with shipping and what's left to be done.

I hope you'll find this useful in some way; when I decided to create this script, I didn't really find anything that seemed to help me do the job I wanted to do.

“Alexa, do Standup”


Since I joined the Apprenda team, the ritual of daily R&D team standups has been a pretty constant companion. Being able to get a ten-thousand-foot view of our progress helps keep everyone on the same page, even as our team has grown over the years. One of the rituals of our morning standups has been the deployment report, where we're updated on how nightly tests and deployments of the Apprenda Cloud Platform have fared.

As a member of our tools and infrastructure team, I'm always on the lookout for ways to improve developer efficiency; I'm also a big gadget fan. The latter has led me to develop quite the collection of Amazon Echo devices, and the former led me down the road of trying to invite Alexa into our daily standups. In this post, I'd like to show you one of the results of that, along with some sample code and thoughts on how to bring Alexa into your daily standups.

The Problem

Let's say hello to our friendly test environment; we'll call it 'Bourbon'. Bourbon is a relatively small test environment - for us, just a group of Apprenda Cloud Platform (ACP from here on out) features and environment specifications that we can refer to. Every day, we take the most recent version of ACP and install it on Bourbon, which results in a successful deployment in our TeamCity instance.

However, sometimes, disaster can strike! Bourbon might not get deployed correctly, or it might have encountered a problem when performing a deployment step. Previously, we would have an engineer look through the TeamCity results page every morning before standup and prepare a report of what succeeded, what failed, and what versions/branches were deployed.

So, let's figure out how to get Alexa to tell us what's going on instead.

Talking to TeamCity

We use an internal tool known as Gauntlet to store information about our test environments. Gauntlet is a .NET service, so the first part of our new Alexa service will leverage it.

First, we'll want to define a quick model -

public class DeploymentStatusResource
{
    public string DeploymentStatus { get; set; }
    public string EnvironmentName { get; set; }
    public string Package { get; set; }
    public string PreUpgradePackage { get; set; }
    public string Type { get; set; }
    public DateTime DeploymentTime { get; set; }
}

Nothing too fancy so far, just the information we care about. Since TeamCity returns success or failure as a string, we'll preserve the information as a string in order to support more detailed information at a later time.

We grab the current environment list from Gauntlet, and then use FluentTc to query our TeamCity instance for all builds within the past 24 hours.

var tc = new RemoteTc().Connect(x => x.ToHost("teamcity").AsUser("username", "password"));
var builds = tc.GetBuilds(x => x.SinceDate(DateTime.Now.AddDays(-1)),
                          x => x.IncludeDefaults()
                                .IncludeStartDate()
                                .IncludeFinishDate()
                                .IncludeStatusText());

We're not done yet, though. TeamCity's REST API does not inflate the resources returned from a call to `GetBuilds`, so we need to go back to the well to get more information (such as the name of the build configuration!)

foreach (var build in builds)
{
    try
    {
        inflatedBuilds.Add(tc.GetLastBuild(x => x.Id(build.Id)));
    }
    catch (Exception ex)
    {
        // Handle Me!
    }
}

Finally, we filter builds to active environments and create our list of deployments.

var deployments = inflatedBuilds.Where(x => x.BuildConfiguration.Name.ToLower().Contains(environmentName));
foreach (var build in deployments)
{
    var item = new DeploymentStatusResource();
    item.DeploymentStatus = build.Status.ToString();
    item.EnvironmentName = environmentName;
    item.DeploymentTime = build.FinishDate;
    if (!build.Properties.Property.Exists(x => x.Name == "upgradePackage"))
    {
        item.Package = build.Properties.Property.Find(x => x.Name == "branch").Value;
        item.Type = "Install";
    }
    else
    {
        item.Package = build.Properties.Property.Find(x => x.Name == "upgradePackage").Value;
        item.Type = "Upgrade";
        item.PreUpgradePackage = build.Properties.Property.Find(x => x.Name == "preUpgradePackage").Value;
    }
    DeploymentList.Add(item);
}

Finally, create a new controller and route for the endpoint, and we can GET {server}/api/reports/deployments to receive a response that includes items such as this -

[
  {
    "deploymentStatus": "Success",
    "environmentName": "bourbon",
    "package": "feature-coolnewfeature",
    "preUpgradePackage": null,
    "type": "Install",
    "deploymentTime": "2017-01-06T03:29:06.243Z"
  },
  ...
]
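For completeness, a minimal sketch of that controller - the class, route, and DeploymentReportBuilder names are my own naming for illustration, not Gauntlet's actual code:

public class ReportsController : ApiController
{
    [HttpGet]
    [Route("api/reports/deployments")]
    public IEnumerable<DeploymentStatusResource> GetDeployments()
    {
        // DeploymentReportBuilder is a made-up name wrapping the FluentTc logic shown above
        var builder = new DeploymentReportBuilder();
        return builder.GetDeployments();
    }
}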

Great! So, now what?

High-Level Design

For running our Alexa skill, we'd like to use AWS Lambda. It's free for up to a million requests a month, which is far more than we'll possibly need for a primarily internal service. Lambda also has a convenient integration with the Alexa developer portal and tools.

As I mentioned in part one, we're pulling data from an HTTP service that's part of a larger internal service. Placing the endpoint for this service on the public internet isn't really an option! So, how to get data out of it?

Since we don't need real-time resolution of these test deployments (given that they generally only run a few times a day and can take some time to perform), we'll use a small Golang application that runs on a schedule in order to exfiltrate our data to an AWS S3 bucket that the Lambda pulls from.

Getting Data Out To S3

Our data is pretty straightforward; we can represent it as a simple JSON text file. With that in mind, I created a simple Golang application that I'll run via Docker. The code for this is below:

package main

import (
    "fmt"
    "net/http"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
    url := "service url"
    res, err := http.Get(url)
    if err != nil {
        // make sure you handle errors properly in your own code!
        panic(err)
    }
    defer res.Body.Close()

    fmt.Println("Uploading report to S3.")

    creds := credentials.NewStaticCredentials(os.Getenv("AWS_ACCESS_KEY"), os.Getenv("AWS_SECRET_ACCESS_KEY"), "")

    sesh := session.New(&aws.Config{
        Credentials: creds,
        Region:      aws.String("us-east-1"),
    })

    uploader := s3manager.NewUploader(sesh)
    s3res, err := uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String("bucket-name"),
        Key:    aws.String("deploymentreport"),
        Body:   res.Body,
    })
    if err != nil {
        panic(err)
    }

    fmt.Println("Uploaded file to ", s3res.Location)
}

The corresponding Dockerfile is equally straightforward:

FROM golang:onbuild
ENV AWS_ACCESS_KEY MyAccessKey
ENV AWS_SECRET_ACCESS_KEY MySecretKey

Remember - never commit AWS keys to a git repository! Consider using key management to store secrets.

For my purposes, we can simply build and copy the Docker image to another host: docker build -t publish_srv . && docker save -o publish_img publish_srv. Copy the tarfile to your Docker host however you prefer, and load it via docker load -i path/to/img.

I chose to use cron on my Docker host to docker run publish_srv at a regular interval. Other options exist as well; it's possible to leave the container and application running constantly and schedule the execution of the task at some defined interval.
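For illustration, the crontab entry is about as simple as it gets (the schedule here is arbitrary - pick whatever cadence lines up with your deployments):

# publish the deployment report to S3 every morning at 6:00
0 6 * * * docker run --rm publish_srv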

The Joy of the Cloud

"Wait, why use S3? Why not publish results to some sort of document store, or a relational database?" Why not use S3? It's dirt-cheap for something that is being pushed only several times a day (consider that PUT requests are billed at $0.005/1,000) and each result is only a few kb in size. One of the biggest challenges when transitioning to cloud-native is breaking the mental model of trying to fit all of your pegs into database-shaped holes. A point to Amazon here as well; S3 is incredibly easy to use from Lambda. Lambda functions have the API keys for their roles available during Lambda execution, which means you don't have to fiddle with secrets management in Lambda functions. That doesn't mean you can't, obviously, but why wouldn't you in this case?

Being able to utilize S3 as a go-between for internal providers of data and external consumers of data grants us the ability to begin extending and refactoring legacy applications and services into cloud-native patterns. In fact, for many internal applications, S3 or other scalable cloud storage might wind up being the only data store you actually need.

Creating and Bootstrapping an Alexa Skill

First, you'll need an Amazon account and an Amazon developer account. If you want to test your skill on a live Echo (without publishing it), make sure you use the same Amazon account that an Echo device is registered to.

Log in to the Amazon Developer Portal, and under the Alexa tab, click 'Get started' on the Alexa Skills Kit entry. On the next page, you'll want to create a new skill and enter some basic information about your application.

You might note that there's an awful lot going on here - Interaction Model, Configuration, and so forth. For now, let's gloss over a lot of these details and select 'Custom Interaction Model' and enter a Skill name and an Invocation name. The latter is how users will interact with your skill, in this case, someone would say "Alexa, ask Reportamatic..." and continue with their interaction from there. Let's figure that out before we go any further.

Technically, the only thing you need to do is create an application that supports the requests from the Alexa service and responds appropriately, which leaves quite a bit of room for individual implementations in whatever language you might prefer. If you're running on Lambda, you have several options - C#, Java 8, node.js 4.3, or Python 2.7. To speed up development of basic skills, there are several frameworks that you can avail yourself of, including the alexa-app and alexa-app-server projects.

I don't mind node, so let's go ahead and use that. The full use of both of these packages is a little outside the scope of this post, but it's not much harder than npm install alexa-app-server --save and creating new skills in your server's apps path. Again, see the full documentation on GitHub for more details. The framework lets us quickly build intents and interaction models through extra parameters passed into the app.intent function. First things first, let's create the application -

var alexa = require('alexa-app');
var AWS = require('aws-sdk');

module.change_code = 1;

var app = new alexa.app('deploymentreport');
var s3 = new AWS.S3();

app.launch(function(req, res) {
  var prompt = "Ask me for the deployment report, or for a report on a specific environment";
  res.say(prompt).reprompt(prompt);
});

Our imports are fairly straightforward; the alexa-app framework, and the AWS SDK for node.js. module.change_code = 1 enables hot-reload of our module when executed in the alexa-app-server. Finally, we create an application and assign the Launch request. This is essentially the default command passed to an Alexa skill, and is triggered when a user invokes the skill without any other invocation. res.say sends a block of text back out to the Alexa service that will be translated into speech and output from the user's Echo.

Now, behind the scenes, this is all just a bunch of requests coming and going. For instance, here's the JSON for a LaunchRequest -

{
  "version": "1.0",
  "session": {
    "new": true,
    "sessionId": "amzn.test.id",
    "application": {
      "applicationId": "amzn.test.app.id"
    },
    "attributes": {},
    "user": {
      "userId": "amzn.test.user.id"
    }
  },
  "request": {
    "type": "LaunchRequest",
    "requestId": "amzn1.echo-api.request.test",
    "timestamp": "2015-05-13T12:34:56Z"
  }
}

This is the basic format for requests from the Alexa service into your Lambda; sessions are important if you're dealing with conversations or multi-stage interactions, as you'll need to read and write information from and to them to persist data between steps. The request object itself is where you'll find information such as intents, mapped utterances, and so forth. For comparison, here's the request object for a specific intent.

{
  "type": "IntentRequest",
  "requestId": "amzn.request.test.id",
  "timestamp": "2015-05-13T12:34:56Z",
  "intent": {
    "name": "SpecificReportIntent",
    "slots": {
      "NAME": {
        "value": "environment1",
        "name": "NAME"
      }
    }
  }
}

Thankfully, we have a convenient way to deal with these requests in our framework - app.intent.

app.intent('SpecificReportIntent',
  // this argument is optional and will create an intent schema and utterances for you
  {
    "slots": {
      "NAME": "EnvironmentName"
    },
    "utterances": [
      "{what is the deployment report|get deployment report|get report} for {-|EnvironmentName}"
    ]
  },
  // do the actual work of the intent
  function(req, res) {
    // pull the environment name out of the NAME slot
    var envName = (req.slot('NAME') || '').toLowerCase();
    s3.getObject({ Bucket: "my-bucket", Key: "deploymentreport" },
      function(e, s3res) {
        var data = JSON.parse(s3res.Body.toString());
        // simply match the environment name sent in through the intent to the data we're getting from the report
        var match = data.filter(function(item) {
          return item.environmentName.toLowerCase() == envName;
        });
        // parse the object for interesting information and build the response
        if (match.length > 0) {
          res.say(`${envName} deployment was an ${match[0].type} to ${match[0].package} and it was ${match[0].deploymentStatus}`);
        } else {
          res.say('I could not find a report for that environment.');
        }
        // make sure to explicitly send the response
        res.send();
      });
    // watch out, since the call to get our object is async, we need to immediately return false (library design concern)
    return false;
  }
);

Ultimately, we're simply taking an array of JSON like we defined all the way back in part one and searching for a name match. How does Alexa know what intent to call, though? That's where the intent schema and sample utterances come in.

Schematically Speaking...

Another convenience of our library is that it can work in conjunction with the alexa-app-server to automatically generate an intent schema. Intent schemas are essentially mappings that let the Alexa service know what request to send to your application in response to your voice. Here's the schema for our SpecificReportIntent.

{
  "intents": [
    {
      "intent": "SpecificReportIntent",
      "slots": [
        {
          "name": "NAME",
          "type": "EnvironmentName"
        }
      ]
    }
  ]
}

Pretty simple, yeah? What's that EnvironmentName type, though? Alexa allows us to define a Custom Slot Type - a list of words it should try to match to. This improves voice recognition greatly, as the recognizer attempts to map utterances to a known set of phonemes. We set up the Intent Schema, Custom Slot Types, and Sample Utterances back in the Amazon Developer Portal.
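For illustration, the custom slot type itself is nothing more than a type name plus a list of values entered in the portal - the environment names below are made up:

EnvironmentName
  bourbon
  scotch
  rye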

Take note! Your schema and custom type may be small, but your sample utterances will probably not be! Your utterances need to capture _all_ of the ways a user might interact with your skill. One of the topics we haven't touched on at all is developing a quality Voice UI (VUI), and if you're planning on doing Alexa skills 'for real' then you should certainly invest quite a bit of time on designing the VUI. Utterances aren't terribly discoverable, after all, and people from different cultural or educational backgrounds may say the same thing in subtly different ways.

Let's finish our skill up with a final intent, one where we can get all of the available reports.

app.intent('AllReportIntent',
  {
    "utterances": ["{what is the deployment report|get the deployment report|what were deployments like}"]
  },
  function(req, res) {
    s3.getObject({ Bucket: "my-bucket", Key: "deploymentreport" }, function(e, s3res) {
      var data = JSON.parse(s3res.Body.toString());
      data.forEach(function(item) {
        res.say(`${item.environmentName} deployment was an ${item.type} to ${item.package} and it was a ${item.deploymentStatus} <break time="1s"/>`);
      });
      res.send();
    });
    return false;
  }
);

One thing to point out - see that break tag at the end of our res.say call? Since the text that's sent back is interpreted as SSML, you're able to add various pauses or instructions for how it should be spoken.

At the end of our declarations, we need to export our application via `module.exports = app;` and then we're done with node for the time being. To deploy your skill to Lambda, simply make a zip file of its package.json, node_modules, and all .js files in the folder, and upload it as a new Lambda function. This requires an AWS account, which again, is slightly outside the scope of this post. I will note that when you make the Lambda function, you'll need to create an IAM role to execute the function under. Please see the AWS documentation for more information on how to configure this role.
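If you're on a *nix machine, packaging the skill up is a one-liner (run it from the skill's directory; the zip name is arbitrary):

zip -r deploymentreport.zip package.json node_modules *.js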

Back in the Amazon Developer Portal, one last thing to do. First, get the ARN ID of your Lambda function (upper-right corner of the Lambda page) and copy it. In the Developer Portal, under the 'Configuration' option, you'll see a space to enter it.

With that, you're pretty much done! You should be able to go into the test tab, send a sample request, and see the appropriate response. You should also be able to query an Echo device on your developer account with one of your intents and have it respond to you.

Wrapping Up

This is, of course, a pretty simple example - we didn't implement a lot of sorting, filtering, or other conversational options on our data. Once I have time, I plan to add more information to the data from our internal systems, so that users can get more details (such as what tests passed or failed) and have conversations with the skill (rather than having it simply read out a list of items). However, I hope that you'll take the ideas and samples in this series and use them to build something amazing for your team! If you've got any questions or want to share some cool stuff you've built with Alexa, you can find me on Twitter @austinlparker or via e-mail [email protected].