Monday, 25 January 2016

How to train your Docker using voice recognition

I spent last weekend playing about with some voice recognition tools.  There are lots out there but PocketSphinx seemed pretty cool.  The plan was to get PocketSphinx running in a container and use the voice decoding to start and stop another container using commands such as "docker start chrome" and "docker stop chrome".  Similar to "Ok Google" and "Siri", I wanted to control containers with the power of speech ... Ok Docker.

The first few hours were spent installing dependencies, tweaking knobs and blowing whistles trying to get the mic to work.  The build steps are all documented in the Dockerfile, which can be found here.  The usage can also be found in the Dockerfile.

There is one Python script, okdocker.py, which can be used as an entrypoint or run manually inside the container for debugging.

When the okdocker container is run, the host's sound device (/dev/snd) is shared into the container.

Demo :

docker run -it --privileged --device /dev/snd -v `pwd`/wav:/opt/okdocker/wav --group-add audio thshaw/okdocker --demo

** If you copy and paste the line above be sure to check it is copied intact **

You will be prompted to record some speech for 3 seconds.  This will then be decoded and the text will be output on screen. The okdocker image has the US English language model included.
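
Under the hood this is just a short recording passed to the PocketSphinx decoder.  Roughly speaking, and assuming ALSA's arecord and the pocketsphinx_continuous binary are available inside the image, the equivalent manual steps look like this :

# record 3 seconds of 16kHz, 16-bit mono audio (what the default acoustic model expects)
arecord -d 3 -f S16_LE -r 16000 /opt/okdocker/wav/test.wav

# decode the recording and print the recognised text
pocketsphinx_continuous -infile /opt/okdocker/wav/test.wav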

The recognition accuracy is quite poor at the moment but the plan is to train the language model to recognise my Northern Irish accent. This may take a number of weeks/years since the human ear, evolved over millions of years, still doesn't understand the Northern Irish accent.

I'll keep updating the source over the next few weekends and hopefully have more accurate decoding which can be pattern matched to actual Docker commands. If anyone wants to expand on this and maybe even present a working prototype at a Docker Meetup or DockerCon 2016 then that would be fantastic.
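
The pattern matching itself doesn't need to be clever.  A minimal sketch, assuming the decoded text ends up in a shell variable called DECODED, could be as simple as :

# DECODED holds the text returned by the decoder, e.g. "ok docker start chrome"
case "$DECODED" in
  "ok docker start "*) docker start "${DECODED##* }" ;;
  "ok docker stop "*)  docker stop  "${DECODED##* }" ;;
  *) echo "Unrecognised command : $DECODED" ;;
esac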

Sample Output :

docker run -it --privileged --device /dev/snd -v `pwd`/wav:/opt/okdocker/wav --group-add audio thshaw/okdocker --usage

==============================
Ok Docker (Version : 0.1)
==============================


    Command Line Usage :

        ./okdocker.py --option <argument>

    Options :

        record   < .wav filename >
        playback < .wav filename >
        decode   < .wav filename >

        demo (Recording and decoding demo)

Saturday, 24 October 2015

Never miss a DART again ... conditions apply

Just a short post about this handy little script one of my co-workers (Tim Czerniak) wrote :

https://github.com/timczerniak/dart

This post will only be of interest if you meet the following criteria :


  • You live in Dublin
  • You use the DART (Dublin Area Rapid Transit)
  • You use Ubuntu for your main OS
  • You have Docker installed

If you don't meet all of these criteria then you might as well move along ... 

For anyone who is still here, this post is about running the Dart script inside a container and displaying the information on your desktop using Conky.

I personally find this useful during working hours as I can keep an eye on when to leave without opening a terminal or browser.

Source can be found here :


3 Steps (sketched below) :

  1. Update the conkyrc file
  2. Build the Docker image
  3. Run the container
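
Assuming the image is tagged dart (the actual names are in the repo above), steps 2 and 3 boil down to something like :

docker build -t dart .
docker run --rm dart

The conkyrc change is essentially pointing an execi entry at that docker run command so Conky refreshes the DART times on an interval.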

Looks like this when running : 

That's it.  Might be of interest to someone.

Saturday, 10 October 2015

Putting the Joy(ent) back into Docker

I'm currently working for Demonware (Activision) as a Build Engineer.  The following comments are my own.

I've been using Docker daily since version 0.6.  The speed, lightweight nature and simplicity drew me in from the very first docker run.  Running Docker containers on baremetal was a joy.  Simple to setup, simple to debug and simple to upgrade.

Two years later, and while I still use Docker daily, this "joy" is fading.  As I move towards running containers in various public clouds, more and more of the day is being spent within VMs and building AMIs.  Running containers in VMs seems to be the most common and widely documented way to get containers running in Production.  Public cloud providers are tweaking their infrastructure to allow containers to run in VMs.  This is great for additional isolation but we lose some of the key benefits of Docker.  Performance in particular.  It feels wrong.

At the October Docker Meetup in Dublin we had 2 special guests, Tom Barlow from Docker and Casey Bisson from Joyent.  Both gave great talks but it was something that Casey said that prompted this post.  It was the "pureness" of Docker that attracted me in version 0.6.  The predictability and minimal layers of abstraction meant that any unusual behaviour was quickly resolved by a reinstall of Docker.  This has become more difficult when running Docker in a cloud.  There are a lot more variables in play now.

As mentioned above, public cloud providers are enabling Docker containers to run on their infrastructure, which is fantastic for Docker adoption but not necessarily for the future of containerization.  There is one company in particular that has designed their infrastructure around running containers natively, based on technologies such as OpenSolaris, Zones, Crossbow and ZFS.  Joyent are, in my opinion, years ahead of the competition.

I signed up for a Joyent account today and within 5 minutes I was running containers on baremetal within the Joyent datacenters.  The setup was as simple as that initial Docker install back in September 2013.  The experience was as pure as the first docker run command and most importantly it has put the joy back into spinning up containers on baremetal.

I'm not affiliated with Joyent in any way and have no idea how much today's experiments cost, but removing the VM layer and running containers natively, as they deserve, is almost priceless.  No more shoehorning containers into VMs just to get them running in the cloud.

Many companies are currently evaluating running Docker in production.  If you find yourself in this situation then I would highly recommend trying Joyent.

Thursday, 27 August 2015

Running Unity3d experimental build in Docker

On August 26th 2015 an experimental Unity build was released for Linux.  Details can be found here :
http://blogs.unity3d.com/2015/08/26/unity-comes-to-linux-experimental-build-now-available/

This was big news around the Demonware office and is also big news for game developers in general.

Unity3D is a powerful cross-platform 3D engine and a user friendly development environment. Easy enough for the beginner and powerful enough for the expert; Unity should interest anybody who wants to easily create 3D games and applications for mobile, desktop, the web, and consoles.

This is still a work in progress but here are some notes on running unity3d inside a docker container with a few little caveats.

Dockerfile and README.md can be found here :

https://github.com/tommyoshaw/unity3d

I'll debug the caveats over the weekend and update this blog post with more details.

If you have issues building the image you can pull from the Docker Hub :

docker pull thshaw/unity3d
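
I haven't settled on a final run command yet, but something along these lines should get the editor on screen.  This is a guess at the flags for now, reusing the X11 socket sharing from my other posts, so expect to tweak it :

docker run -it --net host -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY --name unity3d thshaw/unity3d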

Good luck.

Saturday, 22 August 2015

Using Docker Compose to setup Elasticsearch, Kibana and Packetbeat

This post is based on work done by Alex : http://agonzalezro.github.io/log-your-docker-containers-from-a-container-with-packetbeat.html

I started looking at packetbeat earlier today and wanted to use docker-compose to simplify the setup of Elasticsearch, Kibana and Packetbeat.

The beauty of Docker is that I know nothing about Elasticsearch, Kibana or packetbeat but within a few minutes it is up and running and it's play time :)

The code for this blog can be found here :



Firstly ensure you have docker and docker-compose installed.
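
A quick way to check both are on the PATH :

docker --version
docker-compose --version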

Next up, grab the code and run docker-compose.
  1. git clone https://github.com/tommyoshaw/packetbeat.git
  2. cd packetbeat
  3. docker-compose up -d
Output :

Creating packetbeat_test_1...
Creating elasticsearch...
Creating kibana...
Creating packetbeat...

That's it.

So what just happened?  We now have 4 containers running.  

To verify this run :

docker-compose ps

Output :

      Name                     Command               State                        Ports                      
------------------------------------------------------------------------------------------------------------
elasticsearch       /docker-entrypoint.sh elas ...   Up       0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp 
kibana              /docker-entrypoint.sh kibana     Up       0.0.0.0:5601->5601/tcp                         
packetbeat          /bin/bash -c                     Up                                                      
                        echo Wai ...                                                                         
packetbeat_test_1   bash -c apt-get update           Exit 0   


The Elasticsearch container is started first.  The Kibana container is started second and linked to elasticsearch.  The packetbeat container then waits for Kibana to become available before running the packetbeat script.  This is a crude netcat command but it works for this example.
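
The wait itself is nothing more elaborate than a loop around nc.  The exact line lives in the compose file, but the shape of it is :

# block until Kibana is listening on port 5601, then start packetbeat
until nc -z kibana 5601; do sleep 1; done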

The packetbeat_test container just puts some test data into elasticsearch to ensure everything is working.

Open a browser and go to : http://localhost:5601

Set the index pattern to : packetbeat-*

If no items are returned after entering this pattern it means the packetbeat_test container ran too soon.  To rerun the test container just run :

docker-compose up -d test

Go back to the browser, update the pattern to packetbeat-* and the data packets will be available.

Click on the "Discovery" tab and you will see details of the data packets from the packetbeat_test container.

There are lots of cool filters, visualizations and search capabilities available.  That's as far as I've got. 

One last thing.  You can use the docker-compose scale option to populate elasticsearch with a lot more data.  Worth noting that if you are in Starbucks using your mobile internet and you scale to 100 test containers that run apt-get install, then you will very quickly hit your data limit.

This command will start 20 test containers, each running apt-get update.

docker-compose scale test=20

Friday, 31 July 2015

Running Android apps in Docker using Google's ARC Welder

Firstly what is ARC Welder ?

It's an app runtime that allows you to run Android apps in Chrome.  It is still in beta but looks really promising.  More details here : https://developer.chrome.com/apps/getstarted_arc

Installing the ARC Welder app is simple.  Just search the Chrome Webstore and add it to the browser.  Within minutes you can import a *.apk file for your favorite app and start playing about with it.  Cool.

There is a limitation though.  You can only load one Android app at a time.  Docker to the rescue. You can install Chrome and ARC Welder inside a container.  Each container can then be used for different Android Apps.

This may be handy for developers or QA folk who wish to run multiple versions of an app.

We also get the other benefits of Docker :

  1. Each app is isolated
  2. No modification needed on your host
  3. Easy to scale 
There are lots of ways to run Android apps for debugging purposes but this is the simplest I've come across.

To try this out you can run : 

docker run -it --net host --cpuset-cpus 0 --memory 512mb -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY -v $HOME/Downloads:/root/Downloads --device /dev/snd --name arcwelder thshaw/arc-welder

You can easily start multiple arc-welder containers, one for each app  :

 docker run -it --net host --cpuset-cpus 0 --memory 512mb -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY -v $HOME/Downloads:/root/Downloads --device /dev/snd --name instagram thshaw/arc-welder

docker run -it --net host --cpuset-cpus 0 --memory 512mb -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY -v $HOME/Downloads:/root/Downloads --device /dev/snd --name evernote thshaw/arc-welder

docker run -it --net host --cpuset-cpus 0 --memory 512mb -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY -v $HOME/Downloads:/root/Downloads --device /dev/snd --name angrybirds thshaw/arc-welder


More details can be found here :

Docker Hub : https://registry.hub.docker.com/u/thshaw/arc-welder/

Github : https://github.com/tommyoshaw/arc-welder

Demo : Multiple Android Apps in containers

Demo : Angry Birds running on ARC Welder in Docker



Saturday, 25 July 2015

Dockerizing Evernote

Running Evernote in Docker using wine

There are a number of alternatives to accessing Evernote on Linux like Nevernote, Geeknote and Everpad.
As a long time user of Evernote and a Docker enthusiast it made sense to combine the two.
Disclaimer : I am not a power user of Evernote so people may find issues with webcam and sound functionality.

Usage :

docker run -d -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY --name evernote thshaw/evernote
** If you copy and paste the line above be sure to check it is copied intact **
The first time this runs it will run the Evernote setup.  Just log in and the sync will start.
To stop the container :
docker stop evernote
To start the container :
docker start evernote

Versions :

Ubuntu : 14.04

Evernote : 5.8.13

Wine : 1.7.4

The Evernote image is built using : https://registry.hub.docker.com/u/thshaw/evernote/dockerfile/