On data-only containers in Docker…

Sometimes you just need a place to dump things…


What’s the problem?

You might read that using scratch is a bad idea for the data-only container pattern, and maybe that's true. Maybe you should use another copy of your base image to host the data. But doesn't that break the concept of a data-only container? As a security-minded individual, it strikes me as odd to store applications alongside data when the whole point is to segregate the two. So I set out to prove that a secure data container pattern is possible, in as few commands as possible.

Big warning

This method is a little unusual in that it requires an “initialization phase” for the data container. It works, but it’s not perfect and may not scale well without further work.

How it’s done

Suppose you start with a Dockerfile like so…

FROM alpine
RUN adduser -D test && mkdir /foo && touch /foo/bar && chown -R test:test /foo
USER test
CMD ls -lh /foo

And you build it thusly:

docker build -t test - < Dockerfile

Let’s run this container without a data container:

docker run --rm test

Which delivers this fully expected news:

total 0
-rw-r--r--    1 test     test           0 Apr  1 12:23 bar

And you create a dummy data container from scratch:

docker create -v /foo --name test-data scratch foo

Obviously, since scratch contains no binaries, the foo command could never actually run, but since this is a data container that’s not a problem: we will never start it.

Now let’s run the image with our shiny new data container.

docker run --rm --volumes-from test-data test


total 0

Drat. Right, there’s nothing in that folder now because the data container’s empty volume is mounted over /foo. Let’s put something there.

docker run --rm --volumes-from test-data test sh -c "touch /foo/bar"

Which understandably results in:

touch: /foo/bar: Permission denied

Because the data container’s volume is mounted at /foo owned by root:root, and our image runs as the unprivileged test user.

So how do we handle this? With initialization! Remember Uncle Ben: with great power comes something something… let’s do it.

docker run --rm --volumes-from test-data test sh -c "su -c 'chown -R test:test /foo' && touch /foo/bar"

Now let’s re-run our previous command:

docker run --rm --volumes-from test-data test

Correctly resulting in:

-rw-r--r--    1 test     test           0 Apr  1 13:05 bar

Security is intact! Let’s double-check that:

docker run --rm --volumes-from test-data test sh -c "touch /foo/baz && ls -lh /foo"

And now we get no errors! Also this:

total 0
-rw-r--r--    1 test     test           0 Apr  1 13:05 bar
-rw-r--r--    1 test     test           0 Apr  1 13:08 baz

Today I learned…

  • Security can be properly handled using a from-scratch data container.
  • Backups of a data container can be made easily via docker commands, without all the extra distro cruft (see the sketch just after this list).
  • If someone does manage to get into your data container there are no commands available and it really can’t be started at all.
  • It can be done!
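For example, backing up the test-data volume from above is one throwaway container away (the archive name here is just something I made up):

docker run --rm --volumes-from test-data -v $(pwd):/backup alpine tar czf /backup/foo-backup.tgz /foo

Restoring is the same trick in reverse with tar xzf.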

TIL – Dockerfiles are really sensitive to OOO

The Land of Ooo

TIL – Order of operations is important!

Today I learned that order can have a profound effect on the size of the resulting images. For example my first Dockerfile contained a section something like this:

RUN curl http://www.bigfiles.com/bigfile.tgz > /opt/bigfile.tgz && \
    tar -C /opt -xzf /opt/bigfile.tgz

Which produced a ~300 MiB image.
But wait… the permissions are wrong! Let’s fix that:

RUN curl http://www.bigfiles.com/bigfile.tgz > /opt/bigfile.tgz && \
    tar -C /opt -xzf /opt/bigfile.tgz
RUN chown -R user:user /opt/bigfiledir

Wow! Suddenly we are over 500 MiB! Why?

AuFS ends up storing the files twice: the chown in its own RUN instruction copies every file it touches into a new layer, which costs an extra ~200 MiB just to change permissions. Yikes!

So what’s the solution?

RUN curl http://www.bigfiles.com/bigfile.tgz > /opt/bigfile.tgz && \
    tar -C /opt -xzf /opt/bigfile.tgz && \
    chown -R user:user /opt/bigfiledir

Since the download, extraction, and ownership change all happen in one RUN instruction, everything lands in a single layer and nothing gets duplicated.
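A quick way to spot this sort of bloat (the image tag here is a placeholder) is to check the size Docker records for each layer:

docker history bigimage:latest

Every RUN instruction shows up as its own layer, so a chown layer weighing hundreds of MiB stands out immediately.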

My actual work was much more complicated, so it took a while to spot this, but it’s an important lesson nonetheless.

Keep your images small folks!

CoreOS + Etcd + Flannel = Pretty Cool

This is just a brief update on what I’m looking into.

I have a cluster, how do containers on different machines talk?

An easy solution is Flannel + Etcd. Here’s a simple example which will work on two machines. (I suggest using coreos-vagrant to create a two node cluster with flannel. Here’s my user-data and config.rb.)
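The heart of that user-data is simply starting etcd and giving flannel a network range to carve per-host subnets from. Here is a rough sketch (the discovery token and the 10.1.0.0/16 range are placeholders I picked; my real file has more in it):

#cloud-config
coreos:
  etcd2:
    # generate a fresh token at https://discovery.etcd.io/new?size=2
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$public_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
    - name: flanneld.service
      drop-ins:
        - name: 50-network-config.conf
          content: |
            [Service]
            ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
      command: start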

First Machine:

CID=$(docker run -d alpine /bin/sh -c "while true; do sleep 1; done") && \
  etcdctl set /server $(docker inspect --format '{{.NetworkSettings.IPAddress}}' $CID) && \
  docker exec -it $CID /usr/bin/nc -l -p 9999 && \
  docker stop --time=0 $CID && \
  docker rm $CID

Note: Exit with Ctrl+P Ctrl+Q (^P^Q); a Ctrl+C (^C) will not stop and remove the Docker container.

Second Machine

docker run --rm -it --add-host server:$(etcdctl get /server) alpine /usr/bin/nc server 9999

Now you should be able to chat between the two machines!

Meh? What black magic is this?

I’m simply using etcd to store the flannel IP of the server in the value of the /server key. On another machine we use that IP to attach netcat. The black magic is in flannel’s overlay network and etcd’s distributed key/value store.
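You can peek under the hood on either host; these paths and keys are flannel’s defaults, so treat them as assumptions:

cat /run/flannel/subnet.env
etcdctl ls /coreos.com/network/subnets

The first shows the subnet flannel handed this host, the second lists every subnet lease registered in etcd.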

Breaking apart the commands to the server:

CID=$(docker run -d alpine /bin/sh -c "while true; do sleep 1; done")

This command sets up a docker container running the shell with a script that just sleeps. Simply running netcat doesn’t seem to give a good two-way connection in this example. The $CID part stores the docker container’s ID in a variable so I can use it later. I could also have given it a sensible name with the --name option and passed that name to docker inspect instead.

etcdctl set /server $(docker inspect --format '{{.NetworkSettings.IPAddress}}' $CID)

Here I am setting a key in etcd so that another machine in the fleet (which could be on the other side of the world) can look up the flannel IP address of this server. The command uses the $CID variable captured above (the ID is random) and docker inspect’s --format option to pull out just the container’s IP address.

docker exec -it $CID /usr/bin/nc -l -p 9999

Executing netcat here rather than in the docker run phase is just for demonstration; normally it would be part of the container’s command. Because we are people who like to see things work, running it via docker exec lets us interact with the netcat server directly.

docker stop --time=0 $CID

Stopping the container with a zero time is analogous to killing it.

docker rm $CID

Get rid of that hanging container!
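As an aside, those last two steps can be collapsed into one, since docker rm -f kills and removes a running container:

docker rm -f $CID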

Lesson learned:

  1. Flannel works nicely
  2. Etcd is great
  3. Netcat is amazing
  4. Alpine is a wonderful ~5MiB distro for docker containers
  5. This stuff really needs a web UI! (If you know of a good one, leave a comment and I’ll blog about it.)

Deis … when PaaS becomes IaaS (for one tenant)

Deis - Bare Metal Cluster Diagram

Creating a bare metal clustered Platform as a Service

What will this get you? A cluster which auto-boots and provisions new nodes over PXE (network boot). You will still need to manually provision the first one, but after that all you need is to plug in new machines.

  • 1 computer for hosting the networking business
  • 1 computer to act as the bootstrapping node
  • 2 or more computers to make the cluster complete
  • 1 switch big enough to connect all the computers (plus the one you intend to use on that network)

Firewall / Router / DHCP / Kitchen sink…

Calling it one server that handles all the network tasks is a bit of an oversimplification, but it works well for me. Here’s how I did it:

  1. Install pfSense on a computer with at least two network adapters
  2. Configure one for WAN (your regular network)
  3. Configure the other for LAN (pick a private address range; I’ll refer to the pfSense box’s LAN IP in several steps below)
  4. Login to the web admin page
  5. Go to System -> Packages
    1. Install [Filer, haproxy-devel, and TFTP]
  6. Go to Services -> DHCP Server
    1. Enable DHCP server
    2. Range: a block of addresses inside your LAN subnet
    3. DNS servers: the pfSense LAN IP
    4. Domain name (whatever, you can use something like example.com)
    5. TFTP server: the pfSense LAN IP
    6. Enable network booting
      1. next-server: the pfSense LAN IP
      2. default bios filename: pxelinux.0
  7. Services -> TFTP
    1. TFTP Daemon Interfaces: LAN
    2. Download the syslinux package from kernel.org
    3. Upload the following files to /tftpboot from the syslinux archive you downloaded (you will need to search for them though!) [pxelinux.0 ldlinux.c32 menu.c32 libutil.c32]
    4. Upload the CoreOS PXE boot files coreos_production_pxe.vmlinuz and coreos_production_pxe_image.cpio.gz (refer to the CoreOS PXE Boot Guide if you have trouble) to /tftpboot
    5. Use a client like FileZilla or SCP to upload the pxelinux.cfg/default to /tftpboot, as the web UI can’t make directories. (Here’s mine; you will need to rename it to just default, no extension. A rough sketch of what this file can look like appears just after this list.)
  8. Diagnostics -> Filer
    1. Create three files:
      1. deis-node-auto-install.yml (remove .txt if you download it)
      2. deis-master-1.yml  (remove .txt if you download it)
      3. deis-node.yml  (remove .txt if you download it)
    2. For each of the files replace the [replace me] section with your public key
      1. You did make a public / private key pair right? No?
        1. ssh-keygen -q -t rsa -f ~/.ssh/deis -N '' -C deis
        2. copy the ssh-rsa line from ~/.ssh/deis.pub into those files
        3. also, you might need to run chmod 0700 ~/.ssh/deis since the permissions may be wrong
  9. Firewall -> Virtual IPs
    1. Add
      1. Type IP Alias
      2. Interface LAN
      3. IP Address(es): Type: Single address; Address: an unused IP in your LAN range (this becomes the shared Deis endpoint)
      4. Description Deis HA Proxy
  10. Services -> DNS Forwarder
    1. Enable!
    2. Register DHCP leases!
    3. Register DHCP static mappings!
    4. Interfaces: All
    5. Advanced
      1. address=/.example.com/<the Deis HA Proxy virtual IP from step 9>
    6. Save!
  11. Services -> HA Proxy
    1. Backend
      1. Name deis_http
      2. Add (You will need an entry for each computer you add, so you’ll be back here later to add more; for now we will just add the one we know about.)
        1. Name: controller1
        2. Address: the LAN IP of controller1 (your first node)
        3. Port: 80
      3. Health check method: HTTP
      4. Http check URI: /health-check
      5. Connection timeout: 2147483647
      6. Server timeout: 2147483647
      7. Save!
    2. Backend (another one!) You can copy the one above with slight changes
      1. Name: deis_ssh
      2. For each server change the port to 2222
      3. Balance: Least Connections
      4. Health check method: Basic
      5. Save!
    3. Frontend
      1. Name: deis_http
      2. External address: the Deis HA Proxy virtual IP (from step 9)
      3. Port 80
      4. Backend server pool: deis_http
      5. Type: HTTP/HTTPS(offloading)
      6. Client timeout: 2147483647
      7. Use ‘forwardfor’ option: checked!
      8. Save!
    4. Frontend (again, and yes you can copy with slight changes)
      1. Name: deis_ssh
      2. Port: 2222
      3. Backend server pool: deis_ssh
      4. Type: TCP
      5. Save!

I bet you’re done with pfSense by now. Me too. But you’re almost done!! The rest is pretty easy.

Faking out CoreOS (speeds up installs on multiple machines)

  1. Attach with FileZilla (or another SCP/SFTP client) to the pfSense LAN IP and log in as root / pfsense (unless you changed the password; then use that one)
  2. Change to /usr/local/www
  3. Create a directory called current
  4. Upload the following two files:
    1. coreos_production_image.bin.bz2
    2. coreos_production_image.bin.bz2.sig

Note: If you want to add VPN (and you will), save yourself some massive headaches and just follow this guide.

Your first node… (aww how cute)

Boot your first machine from the network card and you should see a boot menu appear with two options. Pick “Live Deis CoreOS Node (Master #1)”. When it has booted to the console, issue the following commands:

  1. curl <URL of your deis-master-1.yml, as served from the pfSense box> > config
  2. sudo coreos-install -d /dev/sda -b <URL of the pfSense web root where you created the current directory> -c config -V current
  3. sudo reboot

Your first node is now ready!

To N nodes and beyond!

Now just let subsequent machines boot in PXE mode; the default menu option will automatically install CoreOS and reboot each machine into the cluster.

Did you survive? Did I make some omission? Did you notice the totally bogus way I handled the network?

Post a comment! I’d love to hear from you!

Roslyn… Excellent!



(Blog post and mini tutorial)

So Microsoft is working on an amazing new compiler for C# and VB.NET, code-named “Roslyn”, which will make you, your dog, your parents, and the world happy. Why? Is it faster or stronger or something? Well, yes, but that’s pretty normal for new software. The news here is compiler APIs. Let me say it again…

Compiler APIs.

Okay, let that sink in. What is it useful for? Everything from re-factoring code on the fly to powering the new IntelliSense in Visual Studio 2013 to whatever you like.

Show me

Oh, so you want an example, eh? Here’s your example: suppose you need to re-factor some code and you’re using a regular expression to parse and replace code fragments. But what if you run into newer code that uses “var”, for example? What will you do? You could come up with a massively complex way of handling that scenario, or you could use Roslyn. Because Roslyn exposes the compiler as an API, you have access to the semantic tree and all the metadata that tree exposes.

So, ye of little faith, here’s the code already:

using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Roslyn.Compilers.CSharp;

namespace Roslyn
{
    class Program
    {
        static void Main(string[] args)
        {
            // Parse a small program that declares one 'var' and one explicitly typed local.
            SyntaxTree tree = SyntaxTree.ParseText(
@"using System;
using System.Collections;
using System.Linq;
using System.Text;
namespace HelloWorld
{
    class Program
    {
        static void Main(string[] args)
        {
            var helloWorld = ""Hello, World!"";
            string xxxx = ""Happy!"";
        }
    }
}");
            var compilation = Compilation.Create("Program",
                new CompilationOptions(Compilers.OutputKind.ConsoleApplication),
                new SyntaxTree[] { tree });

            var root = tree.GetRoot();
            var newRoot = new Rewriter(compilation.GetSemanticModel(tree)).Visit(root);
            var result = newRoot.ToFullString();
        }
    }

    class Rewriter : SyntaxRewriter
    {
        private readonly SemanticModel _semanticModel;

        public Rewriter(SemanticModel semanticModel)
        {
            _semanticModel = semanticModel;
        }

        public override SyntaxNode VisitLocalDeclarationStatement(LocalDeclarationStatementSyntax node)
        {
            if (node.Declaration.Variables.Count > 1) return node;
            if (node.Declaration.Variables[0].Initializer == null) return node;
            if (node.Declaration.Type.IsVar)
            {
                // Ask the semantic model what 'var' resolves to and swap in the explicit type name.
                var info = _semanticModel.GetTypeInfo(node.Declaration.Type);
                var result = node.WithDeclaration(node.Declaration.WithType(
                    Syntax.ParseTypeName(info.Type.Name)
                          .WithTrailingTrivia(node.Declaration.Type.GetTrailingTrivia())));
                return result;
            }
            return node;
        }
    }
}

Yes, it’s a bit complicated, but really look at it and you’ll see how simple it actually is. This needs not only .NET 4.5 (Visual Studio 2012 or higher) but also the Roslyn CTP, which you can get very easily by running the install command in your NuGet Package Manager Console (you do have that turned on, right?).

(Screenshots: opening the NuGet Package Manager Console and installing the Roslyn CTP package.)

Now compile, run, and examine the contents of result. You will find the var has been replaced by “String”, which is what it should be. If you change the value of the “helloWorld” var in the sample code to 1 and re-run, you should see the var replaced by “Int32”.

Awesome. Enjoy your trek into the wonderful world of Roslyn.

Azure IaaS is awesome but…


Azure IaaS (Infrastructure as a Service) is so unbelievably cool. You can make new VMs and delete them… wait. You can delete them, right? Well, yes and no. You can delete the VM, but for some unknown reason the disk image storage blob stays locked for… I don’t know. Forever? It’s very frustrating to me. In fact, it’s the only frustrating thing I have found about Azure (aside from only being allowed to have one NIC per VM). So I did what I always do: wrote a tool. And that was great, but it didn’t benefit you, the reader. So now you can benefit! Here’s the code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

namespace ConsoleApplication1
{
    public static class AzureStorage
    {
        public static void BreakContainerLease(string AccountName, string AccountKey, string Container)
        {
            var account = new CloudStorageAccount(new StorageCredentialsAccountAndKey(AccountName, AccountKey), true);
            var client = new CloudBlobClient(account.BlobEndpoint, account.Credentials);
            var container = client.GetContainerReference(Container);
            container.BreakLease(new TimeSpan(0, 0, 1));
        }

        public static void BreakBlobLease(string AccountName, string AccountKey, string Container, string BlobName)
        {
            var account = new CloudStorageAccount(new StorageCredentialsAccountAndKey(AccountName, AccountKey), true);
            var client = new CloudBlobClient(account.BlobEndpoint, account.Credentials);
            var container = client.GetContainerReference(Container);
            var blob = container.GetBlobReference(BlobName);
            blob.BreakLease(new TimeSpan(0, 0, 1));
        }
    }
}
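Using it is then a one-liner; the account name, key, container, and blob name below are placeholders for your own values:

AzureStorage.BreakBlobLease("mystorageaccount", "<storage account key>", "vhds", "stuck-vm-disk.vhd");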

And as always, here’s the link to the running SSL encrypted version on my site: Azure Blob Unlocker!!!!

Reflexil Is Awesome, But…


Reflexil together with .NET Reflector is pretty much the most awesome thing ever for getting someone else’s closed source thing to work properly. It’s simple: open the assembly in Reflector, edit the badness using Reflexil and save as a new assembly. Include said new assembly in your project and win!


Sometimes thar be dragons… like when the author signed it. Which is why Reflexil has the very cool ability to strip out the assembly signing (strong name) and make it a normal assembly.

This part gave me some issues…

Sometimes, when referencing the “fixed” assembly, Visual Studio will throw an error at you.

Basically the patched assembly ends up in a state the toolchain doesn’t like, so the easy way around it is to launch your friendly neighborhood Visual Studio Command Prompt and round-trip it with some pretty simple commands:

ildasm Patched.dll /output:Patched_And_Fixed.il
ilasm /DLL Patched_And_Fixed.il

Now you’ll have Patched_And_Fixed.dll which will work!

Thanks go out to the folks behind all these great tools for making our lives easier!

Find an iPhone with C#



This has happened to me a few times; has it happened to you? My wife asks, nay orders, me to call her at a certain time, and invariably, out of politeness, she has switched her phone to vibrate and cannot hear me call no matter how many times I try. She has the really nice iPhone 4 and I have whatever my place of employment has given me (not an iPhone or Android), so my options are limited if I don’t have a computer. Well, you can say goodbye to that!

With the help of Fiddler I was able to track down the API calls that iCloud makes to ping a phone, so now I can make a simple web page that will ping her phone. All I need is my Apple ID, password, and the name of the phone / iMac / iPad / iPod / iGiveUp. It’s even wrapped in a nice static class for you so you can use it anywhere. Enjoy!


using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Web.Script.Serialization;
namespace ConsoleApplication1
  public static class iCloud
    const string iCloudUrl = "https://www.icloud.com";
    const string iCloudLoginUrl = "https://setup.icloud.com/setup/ws/1/login";
    const string iCloudPlaySoundUrl = "https://p03-fmipweb.icloud.com/fmipservice/client/web/playSound";
    const string iCloudInitClientUrl = "https://p03-fmipweb.icloud.com/fmipservice/client/web/initClient";
    public static void Ping(string appleId, string password, string deviceName)
      WebClient wc = new WebClient();
      string authCookies = string.Empty;
      wc.Headers.Add("Origin", iCloudUrl);
      wc.Headers.Add("Content-Type", "text/plain");
      wc.PostDataToWebsite(iCloudLoginUrl, string.Format(
        appleId, password));
      if (wc.ResponseHeaders.AllKeys.Any(k => k == "Set-Cookie"))
        wc.Headers.Add("Cookie", wc.ResponseHeaders["Set-Cookie"]);
      else
        throw new System.Security.SecurityException("Invalid username / password");
      var jsonString = wc.PostDataToWebsite(iCloudInitClientUrl,
        "{\"clientContext\":{\"appName\":\"iCloud Find (Web)\",\"appVersion\":\"2.0\"," +
      if (jsonString.StartsWith("{\"statusCode\":\"200\""))
        var js = new JavaScriptSerializer();
        var response = js.Deserialize(jsonString, typeof(object)) as dynamic;
        var content = response["content"];
        foreach (Dictionary<string, object> o in content)
          if (o.Values.Contains(deviceName))
            var psResult = wc.PostDataToWebsite(iCloudPlaySoundUrl, string.Format(
              "{{\"device\":\"{0}\",\"subject\":\"Find My iPhone Alert\"}}", o["id"]));

You will also need my extension method for WebClient.PostDataToWebsite here:

public static class Extensions
{
  public static string PostDataToWebsite(this WebClient wc, string url, string postData)
  {
    var result = string.Empty;
    wc.Encoding = System.Text.Encoding.UTF8;
    wc.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
    result = wc.UploadString(url, "POST", postData);
    return result;
  }
}
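With both pieces in place, pinging a device is a single call (the Apple ID, password, and device name are placeholders):

iCloud.Ping("someone@example.com", "password", "Jane's iPhone");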

And the brave among you will be able to try it out via this secure page: iCloud Pinger (SSL Encrypted & I don’t log your credentials).

Impersonate a Windows Identity Easily!

Not Me

Why would you want to do Windows Identity impersonation?

Because it’s cool. Really. You can run a program as someone other than who you are. This is especially useful for services, websites, and pretty much anything where you want to run an application in a mode that has different privileges. Do you want more permissions than normal? Use an application in a domain? Lock it down? The sky is the limit!

How is it done?

That part is pretty simple. In fact it’s outlined on this MSDN page. I’ve also got it wrapped up in a much easier to use class here:

using System;
using System.Runtime.ConstrainedExecution;
using System.Runtime.InteropServices;
using System.Security;
using System.Security.Permissions;
using System.Security.Principal;
using Microsoft.Win32.SafeHandles;

namespace ConsoleApplication1
{
    /// Impersonates a windows identity.
    /// Based on: http://msdn.microsoft.com/en-us/library/w070t6ka.aspx
    public class WindowsIdentityImpersonator : IDisposable
    {
        WindowsIdentity _newId;
        SafeTokenHandle _safeTokenHandle;
        WindowsImpersonationContext _impersonatedUser;

        public WindowsIdentity Identity { get { return _newId; } }

        [PermissionSetAttribute(SecurityAction.Demand, Name = "FullTrust")]
        public WindowsIdentityImpersonator(string Domain, string Username, string Password)
        {
            // 2 = LOGON32_LOGON_INTERACTIVE, 0 = LOGON32_PROVIDER_DEFAULT
            bool returnValue = LogonUser(Username, Domain, Password, 2, 0, out _safeTokenHandle);
            if (returnValue == false)
                throw new UnauthorizedAccessException("Could not login as " + Domain + "\\" + Username + ".",
                    new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error()));
        }

        public void BeginImpersonate()
        {
            _newId = new WindowsIdentity(_safeTokenHandle.DangerousGetHandle());
            _impersonatedUser = _newId.Impersonate();
        }

        public void EndImpersonate()
        {
            if (_newId != null)
                _newId.Dispose();
            if (_impersonatedUser != null)
                _impersonatedUser.Undo();
        }

        public void Dispose()
        {
            if (_safeTokenHandle != null)
                _safeTokenHandle.Dispose();
        }

        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern bool LogonUser(String lpszUsername, String lpszDomain, String lpszPassword,
            int dwLogonType, int dwLogonProvider, out SafeTokenHandle phToken);

        [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
        public extern static bool CloseHandle(IntPtr handle);
    }

    public sealed class SafeTokenHandle : SafeHandleZeroOrMinusOneIsInvalid
    {
        private SafeTokenHandle() : base(true) { }

        [DllImport("kernel32.dll")]
        [ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
        [return: MarshalAs(UnmanagedType.Bool)]
        private static extern bool CloseHandle(IntPtr handle);

        protected override bool ReleaseHandle()
        {
            return CloseHandle(handle);
        }
    }
}
Using this class is simple:

using (var wim = new WindowsIdentityImpersonator("domain", "username", "password"))
{
    wim.BeginImpersonate();
    Console.WriteLine("Thread B: {0}", WindowsIdentity.GetCurrent().Name);
    wim.EndImpersonate();
}

For the domain, if you want to use your local machine, just put in “.”

Ok, that’s cool but what else have you got?

You want more? Okay, wise guy: throw this bad boy into a separate thread (even a ThreadPool will work) and you can run that thread under this identity. Want another one? Start another thread. It’s that easy! One application can perform actions under a hundred different identities!
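Here’s a minimal sketch of that idea using the class above (the domain, username, and password are placeholders):

System.Threading.ThreadPool.QueueUserWorkItem(_ =>
{
    // Impersonation is per-thread, so each work item can run as a different user.
    using (var wim = new WindowsIdentityImpersonator("domain", "serviceUser", "password"))
    {
        wim.BeginImpersonate();
        Console.WriteLine("Running as: {0}", WindowsIdentity.GetCurrent().Name);
        wim.EndImpersonate();
    }
});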

Happy coding!

Cache Control for Fun and Profit


So you have a website and it’s really important. Maybe people are using it publicly and entering credit card numbers or something. What’s the worst thing that can happen? Well, I’ll tell you: the next guy could come along, hit the back button, and see everyone’s credit card numbers. Think it’s not plausible? I once went to the DMV near me and found a kiosk to renew my driver’s license. It was just a computer hooked up to the internet in a very poor kiosk mode. The first thing I did was hit the unmarked backspace key and saw the last guy’s credit card number. Why did that happen? Improper cache control!!! Sorry, that just came out.

I digress.

So how do you do it in .NET? We want to cache some things, but not others. Well, it’s pretty easy, but in practice there are a few tricks. First, we want to cache static content (pictures, JavaScript files, etc.) but we DO NOT want to cache our pages. So let’s set up static content caching in our web.config:

  <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
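For context, that element lives under system.webServer/staticContent; a minimal sketch of the surrounding web.config looks like this:

<configuration>
  <system.webServer>
    <staticContent>
      <!-- client-side caching for static files only: 7 days -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>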

That was easy enough, and it works! But we still have other content (the content we want secured) marked as “private” by the caching defaults, and that’s bad. So we’ll fix it using the powers of Global.asax.cs!

protected void Application_EndRequest(Object sender, EventArgs e)
{
  if ((Response.Headers["Cache-Control"] ?? "") == "private")
  {
    // Stop Caching in IE
    Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache);
    // Stop Caching in Firefox
    Response.Cache.SetNoStore();
  }
}

So we are now catching responses whose cache is marked “private” and making them expire immediately. But why am I not just checking Response.CacheControl? It turns out Response.CacheControl doesn’t reflect what actually goes out on the wire:

Not the same

As you can see, Response.CacheControl thinks the static content is “private” when in reality it is “max-age”. Boo. But by checking Response.Headers directly we see the value that is actually sent and can test against it. This results in epic win!