The Commodification of DevOps


"We are uncovering better ways of developing software by doing it and helping others do it."

The Agile Manifesto, 2001.

It's been quipped more than once that most amazing Silicon Valley innovations are simply a bunch of nerds poorly recreating a service that already exists, but with an app. While I find this to be in some ways a truism (after all, there is nothing new under the sun), it's a fairly trite observation. What's far more interesting is how the organizations that build and deliver these 'innovations' themselves develop, and the process of that development is especially interesting due to the pressure-cooker of free money and labor elasticity that has characterized the 'startup economy' over the past twenty years or so. What does any of this have to do with DevOps, you may ask? Simply this: DevOps is a reaction to the commodification of Agile, and the rise of SRE is a reaction to the commodification of DevOps. To reduce the thesis further, many of the trends you see in software development and delivery can be understood as a cyclical reaction to anarchists running headlong into the invisible backhand of the free market.


Deserted Island DevOps Postmortem


In my experience, it’s the ideas that you don’t expect to work that really take off. When I registered a domain name a month ago for Deserted Island DevOps, I can say pretty confidently that I didn’t expect it to turn into an event with over 8500 viewers. Now that we’re on the other side of it, I figured I should write the story about how it came to be, how I produced it, and talk about some things that went well and some things we could have done better.


Mono in Debian 9 Containers


Running Debian 9 and need to add the Mono repository? You'll find advice written for Debian 8 that suggests using the following:

sudo apt install apt-transport-https dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb https://download.mono-project.com/repo/debian stable-stretch main" | sudo tee /etc/apt/sources.list.d/mono-official-stable.list
sudo apt update

When it comes time to docker build, you might see the following:

Step 6/12 : RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys A6A19B38D3D831EF
 ---> Running in abbbdefb9d15
Executing: /tmp/apt-key-gpghome.GbZgRWnneE/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys A6A19B38D3D831EF
gpg: cannot open '/dev/tty': No such device or address
The command '/bin/sh -c apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys A6A19B38D3D831EF' returned a non-zero code: 2

Don't despair! The following line in your Dockerfile (replacing the apt-key adv command) will get you going:

RUN curl https://download.mono-project.com/repo/xamarin.gpg | apt-key add -

OpenTracing for ASP.NET MVC and WebAPI


Preface - I really like what Microsoft is doing with .NET Core and ASP.NET Core.

However, the horror they've unleashed upon the world in the form of ASP.NET MVC and WebAPI is a sin that will take more than a few moons to wash away. That said, quite a few people are still building software with this stuff, and I got curious about how you'd instrument it with OpenTracing. This post is the result of several hours of hacking towards that end.

Action Filters For Fun And Profit

It's actually pretty straightforward, assuming you know what to Google and can handle the absolute state of the documentation that's available. At a high level, here's how it works. ASP.NET - similar to Java Servlets - provides Action Filters, which are simple lifecycle hooks into the HTTP request pipeline. There are four interfaces you can target if you want to be more specific, but a fairly trivial logging filter can be written like so:

public class CustomLogger : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        Debug.WriteLine($"executing controller: {filterContext.RouteData.Values["controller"]}");
        // etc etc...
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        Debug.WriteLine($"result complete in controller: {filterContext.RouteData.Values["controller"]}");
        // etc etc...
    }
}

Pretty straightforward, like I said. There are also OnActionExecuted and OnResultExecuting, which are called after a controller action and before a controller action result, respectively.

So you'd think it'd be pretty easy, right? OpenTracing provides a handy GlobalTracer singleton, so create a TracingFilter...

public class TracingFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var routeValues = filterContext.RouteData.Values;
        var scope = GlobalTracer.Instance.BuildSpan($"{routeValues["controller"]}").StartActive();
        scope.Span.SetTag("action", routeValues["action"].ToString());
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        var scope = GlobalTracer.Instance.ScopeManager.Active;
        scope.Span.Finish();
    }
}

Then in your RegisterGlobalFilters method, do a quick filters.Add(new TracingFilter()), register a Tracer, and away you go! Right?

Wrong.

Well, half-right.

That Sounds Like Me, Yeah.

Assuming you're only using MVC, you're right. You'll see spans for, say, GETting your index page, but not for any of your API routes. Why? Because there are two ActionFilterAttributes. The one we just wrote is System.Web.Mvc.ActionFilterAttribute. Want your WebAPI traced too? Time to create a System.Web.Http.Filters.ActionFilterAttribute. You can tell them apart by the extremely different method signatures, as seen here -

public class WebApiTracingFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var scope = GlobalTracer.Instance.BuildSpan(actionContext.ControllerContext.ControllerDescriptor.ControllerName).StartActive();
        scope.Span.SetTag("action", actionContext.ActionDescriptor.ActionName);
    }

    public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
    {
        var scope = GlobalTracer.Instance.ScopeManager.Active;
        scope.Span.Finish();
    }
}

Yeah, that took me a few minutes and this StackOverflow answer to puzzle out. C'est la vie.
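
For reference, here's a rough sketch of what wiring both filters up at startup might look like in a stock Global.asax.cs. This isn't lifted from the sample project, and the MockTracer is just a stand-in for whatever real tracer you'd register:

using System.Web;
using System.Web.Http;
using System.Web.Mvc;
using OpenTracing.Mock;
using OpenTracing.Util;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Register a tracer exactly once, before any requests arrive.
        // MockTracer is a placeholder - swap in your real tracer here.
        GlobalTracer.Register(new MockTracer());

        // WebAPI filters hang off the HttpConfiguration...
        GlobalConfiguration.Configure(config =>
        {
            config.MapHttpAttributeRoutes();
            config.Filters.Add(new WebApiTracingFilter()); // System.Web.Http.Filters
        });

        // ...while MVC filters hang off GlobalFilters.
        GlobalFilters.Filters.Add(new TracingFilter()); // System.Web.Mvc
    }
}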

That said, that's pretty much the hard part. Since spans are automagically started and finished whenever a request hits the pipeline, you can implicitly use those parent spans inside a controller to create children:

[WebApiTracingFilter]
public class ValuesController : ApiController
{
    public IEnumerable<string> Get()
    {
        var returnValue = getCurrentTime();
        return new string[] { returnValue };
    }

    private string getCurrentTime()
    {
        using (var scope = GlobalTracer.Instance.BuildSpan("getCurrentTime").StartActive())
        {
            return DateTime.Now.ToShortDateString();
        }
    }

    // and so forth...
}

You can also get fancy with your OnActionExecuted/OnResultExecuted filters by checking for exceptions coming in and adding stack traces to your span logs.
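
As a sketch of one way that could look on the WebAPI side - again, this isn't from the sample project, but HttpActionExecutedContext carries any exception the action threw, and OpenTracing's standard error tag and log fields map onto it nicely:

using System.Collections.Generic;
using System.Web.Http.Filters;
using OpenTracing.Tag;
using OpenTracing.Util;

public class WebApiTracingFilter : ActionFilterAttribute
{
    // OnActionExecuting unchanged from the version above...

    public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
    {
        var scope = GlobalTracer.Instance.ScopeManager.Active;
        if (scope == null)
        {
            return;
        }

        var exception = actionExecutedContext.Exception;
        if (exception != null)
        {
            // Flag the span as errored and attach the stack trace to the span logs.
            Tags.Error.Set(scope.Span, true);
            scope.Span.Log(new Dictionary<string, object>
            {
                { "event", "error" },
                { "error.kind", exception.GetType().Name },
                { "message", exception.Message },
                { "stack", exception.StackTrace }
            });
        }

        scope.Span.Finish();
    }
}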

If you'd like to check out the complete sample project I made, it's on GitHub.

Automating Jira Ticket Creation with Python


Jira - love it, hate it, begrudgingly accept it - it's a fact of life for many of us in the software world. One thing that particularly sucks about Jira to me is that there appears to be an eternal tension regarding process.

You've probably got a boatload of various processes that you'd like to be somewhat repeatable and easy to discover. In my career, I've seen these processes be documented in a variety of places. Wikis, random Word documents on a shared drive, a shared Google Document, a single Jira ticket that's cloned over and over... the list goes on. The problem always comes when you want to embed some sort of information in these tickets (for instance, version numbers for a deployment process to better disambiguate which tickets match up to which deployment) and you want to do it in a way that's easily accessible and versioned.

At Apprenda, we've been striving to become more releasable. Part of this is the automation of our release process, which fits the above criteria to a T. I wrote a little utility that helps us create a series of tickets for release processes, and I'll go ahead and talk about it here and share it with you, Internet, in the hopes that it may help someone else who finds themselves in my shoes.

ticketgen

The repository is located here and will probably be helpful to refer to.

This tool makes a lot of assumptions about process, first off. Mostly that it's being used for a particular process - a new release of an existing piece of software that requires end-to-end tests of multiple scenarios. Of course, you can fork it and make it do whatever you want. I've added some defaults to the options file that should indicate how it's used.

There are a few interesting scenarios exposed within it, though. If we look at the install/upgrade section of options.ini, we can see one.

[CleanInstallSection]
summary: rel-{0}: Clean install on {1} cloud.
description: This is an automatically generated stub!

[UpgradeSection]
summary: rel-{0}: Upgrade from {1} on {2} cloud.
description: This is an automatically generated stub!

The UpgradeSection specifically calls out a particular cloud type, which is hard-coded as either 'single' or 'hybrid' cloud in the script. This could be changed to some other interesting configuration for your purposes. There are format_summary and format_description methods on the Ticket class that will let you pass any number of values into those fields, so you could do something like this if you had more tokens you wanted to switch in:

summary: rel-{0}: {1} test on {2} config with {3} provider
description: Run the {0} test suite with the {1} option enabled on {2}

// Then process those like so:

ticket.format_summary("1.0.0", "regression", "secure", "aws")
ticket.format_description("regression", "secure", "aws")

You could also add those values to the options file and iterate over them.

How It's Helping

Since this all runs inside a Docker container, it can easily be added into a CI/CD pipeline. Preprocess the options file, swap out version info or add it as needed, then build and run the new container.

For some perspective, we're using this to create 30-odd tickets for each release we ship. Previously, we used a shared wiki page that multiple people would edit and 'mark off' what they were working on. The advantage of the Jira-based solution is that it's easier to see at a glance what needs to be done, what hasn't been done, and how these tasks grow or shrink across releases. I've especially found that having this information in Jira is beneficial when trying to communicate across the business and demonstrate areas where improved automation or tooling would help. It's also been useful for people who aren't attached to the release process to understand where the team is with shipping and what's left to be done.

I hope you'll find this useful in some way; when I decided to create this script, I didn't really find anything that seemed to help me do the job I wanted to do.