Set Ubuntu Linux Swapfile Using Python Fabric

Most cloud providers have an irritating habit of not adding any swap memory to the instances you spin up. This is probably because adding swap would make the disk appear smaller than advertised, or, if they used a dedicated swap partition, they would have to bear the cost themselves or again eat into your disk space.

Thankfully adding swap to your Ubuntu Linux instance is fairly easy. The following task, when run, will check whether a swapfile already exists on the host and, if not, create one, mount it and set it to be remounted when the instance is rebooted. It takes a parameter which specifies the size of the swap in gigabytes.


import fabric.contrib.files
from fabric.api import sudo


def setup_swapfile(size=1):
    # only create the swapfile if one does not already exist
    if not fabric.contrib.files.exists('/swapfile'):
        sudo('''fallocate -l %sG /swapfile''' % (size))
        sudo('''chmod 600 /swapfile''')
        sudo('''mkswap /swapfile''')
        sudo('''swapon /swapfile''')
        # make sure the swapfile is remounted after a reboot
        sudo('''echo "/swapfile   none    swap    sw    0   0" >> /etc/fstab''')
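
Assuming the task lives in your fabfile, you can run it against a host like the following (the host name and the two gigabyte size here are just examples):

$ fab -H myserver setup_swapfile:size=2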

BTW, I am writing a book about how to automate your life using Python Fabric. Click the link and register your interest for when it is released.

Python Fabric Set Host List at Runtime

With the advent of cloud computing, where you spin up and tear down servers at will, it becomes extremely useful to pick the hosts you want Fabric to run on at runtime rather than through the usual env.hosts setting. This allows you to query your servers through your cloud provider's API without having to maintain a list. It can be a more powerful and flexible technique than using roles and in a devops world can save you a lot of time.

The trick is to know that Fabric only makes an outgoing SSH connection when the first put/get/sudo/run command is issued. This means you can change env.hosts before that point and target whatever machines you want.

For the example below we are going to run a command on all of our servers after getting a full IP list from our fictitious cloud provider through an API call using the excellent Python requests module.

First, let's define a task which will set our env.hosts to all of the servers in our cloud.


import json

import requests
from fabric.api import env


def all():
    req = requests.get('https://api.mycloud.com/v1/server/list', headers={'API-Key': 'MYAPIKEY'})
    serverlist = json.loads(req.text)

    if len(serverlist) == 0:
        env.hosts = []
        return

    env.hosts = [server['public_ip'] for server in serverlist]

The above makes an HTTPS call using our API key and loads the JSON response into an object. Then, if any servers exist, we pull out the public IP address of each one and assign the list to our environment hosts.

Now we can call it along with a simple task that runs uname to get the hostname of every server inside our cloud.
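
The hostname task itself isn't shown above; a minimal version (assuming a Fabric 1.x fabfile) could be as simple as the following:


from fabric.api import run

def hostname():
    # print the hostname of each target machine
    run('uname -n')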


$ fab all hostname
[box1] run: uname -n
[box1] out: box1
[box2] run: uname -n
[box2] out: box2

Done.
Disconnecting from box1... done.
Disconnecting from box2... done.

You can create individual tasks for each group of servers you control using this technique, but that's not very DRY (don't repeat yourself) or neat. Let's modify our all task to accept a parameter so we can filter down our servers at run time.


def all(server_filter=None):
    req = requests.get('https://api.mycloud.com/v1/server/list', headers={'API-Key': 'MYAPIKEY'})
    serverlist = json.loads(req.text)

    if len(serverlist) == 0:
        env.hosts = []
        return

    if server_filter:
        env.hosts = [server['public_ip'] for server in serverlist if server['tag'] == server_filter]
    else:
        env.hosts = [server['public_ip'] for server in serverlist]

We changed our method to accept a parameter, which defaults to None, that we can use to filter down our servers based on a tag. If your cloud provider's API is sufficiently powerful you can even change the request itself to handle this use case and save yourself the effort of filtering after the response comes back.
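
For example, if the API accepted a tag query parameter (the endpoint and parameter name here are assumptions about our fictitious provider), the filtering could be pushed into the request itself, something like:


def all(server_filter=None):
    params = {}
    if server_filter:
        # let the (hypothetical) API do the filtering for us
        params['tag'] = server_filter

    req = requests.get('https://api.mycloud.com/v1/server/list',
                       headers={'API-Key': 'MYAPIKEY'}, params=params)
    serverlist = json.loads(req.text)

    env.hosts = [server['public_ip'] for server in serverlist]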

To call our new method you pass the filter as in the following example.


$ fab all:linux hostname
[box1] run: uname -n
[box1] out: box1
[box2] run: uname -n
[box2] out: box2

Done.
Disconnecting from box1... done.
Disconnecting from box2... done.

BTW, I am writing a book about how to automate your life using Python Fabric. Click the link and register your interest for when it is released.

What is Chaos Testing / Engineering

A blog post by the excellent technical people at Netflix about Chaos Engineering, and further posts on the subject from Microsoft regarding Azure Search, prompted me to ask the question: what is chaos engineering and how can chaos testing be applied to help me?

What is Chaos Testing?

First coined by the aforementioned Netflix blog post, chaos engineering takes the approach that regardless of how comprehensive your test suite is, once your code is running on enough machines and reaches enough complexity, errors are going to happen. Since failure is unavoidable, why not deliberately introduce it to ensure your systems and processes can deal with it?

To accomplish this, Netflix created the Netflix Simian Army, which consists of a series of tools known as “monkeys” (AKA chaos monkeys) that deliberately inject failure into their services and systems. Microsoft adopted a similar approach by creating their own monkeys which were able to inject faults into their test environments.

What are the advantages of Chaos Testing?

The advantage of chaos engineering is that you can quickly smoke out issues that other testing layers cannot easily capture. This can save you a lot of downtime in the future and help you design and build fault tolerant systems. For example, Netflix runs in AWS and, in response to a regional failure, changed their systems to become region agnostic. The easiest way to confirm this works is to regularly take down important services in separate regions, which is all done through a chaos monkey designed to replicate this failure.

While it is possible to sit down and anticipate some of the issues you can expect when a system fails, knowing what actually happens is another thing entirely.

The result is that you are forced to design and build highly fault tolerant systems that can withstand massive outages with minimal downtime. Accepting that your systems will not have 100% uptime and planning accordingly can be a tremendous competitive advantage.

One thing commonly overlooked with chaos engineering is its ability to find issues caused by cascading failure. You may be confident that your application still works when the database goes down, but would you be so sure if it went down along with your caching layer?

Should I be Chaos Testing?

This really depends on what your tolerances for failure are and the likelihood of those failures happening. If you are writing desktop software, chaos testing is unlikely to yield any value. Much the same applies if you are running a financial system where failures are acceptable so long as everything reconciles at the end of the day.

If however you are running large distributed systems using cloud computing (think 50 or more instances) with a variety of services and processes designed to scale up and out, injecting some chaos will potentially be very valuable.

How to start Chaos Testing?

Thankfully, with cloud computing and the APIs provided, it can be relatively easy to begin chaos testing. By allowing you to control the infrastructure through code, these tools let you replicate a host of errors that are not easily reproducible when running on bare hardware. This does not mean that bare hardware systems cannot perform chaos testing, just that some classes of errors will be harder to reproduce.

Let's start by looking at the way Microsoft and Netflix classify their “monkeys”.

Low chaos
: This refers to failures that our system can recover from gracefully with minimal or no interruption to service availability.

Medium chaos
: Failures that can also be recovered from gracefully, but may result in degraded service performance or availability.

High chaos
: Failures that are more catastrophic and will interrupt service availability.

Extreme chaos
: Failures that cause ungraceful degradation of the service, result in data loss, or simply fail silently without raising alerts.

Microsoft found that by setting up a testing environment and letting the monkeys loose they were able to identify a variety of issues with provisioning instances and services, as well as scaling them to suit. They also split the environments into periods of chaos, where the monkeys ran, and dormant periods, where they did not. Errors found in dormant periods were considered bugs and flagged to be investigated and fixed. During chaos periods any low chaos issues were also considered bugs and scheduled to be investigated and fixed. Medium issues raised low priority alerts for on-call staff to investigate, along with high level issues. Extreme operations, once identified, were not run again until a fix had been introduced.

The process for fixing issues identified this way was the following,

* Discover the issue, identify the impacts if any and determine the root cause.
* Mitigate the issue to prevent data loss or service impact in any customer facing environments
* Reproduce the error through automation
* Fix the error and verify through the previous step it will not reoccur

Once done, the monkey created through the automation step could be added to the regular suite of tests, ensuring that whatever issue was identified would not occur again.

Netflix uses a similar method for fixing issues, but by contrast runs their monkeys in their live environments rather than in a pure testing environment. They also released some information on some of the monkeys they used to introduce failures.

Latency Monkey
: Induces artificial delays into the client-server communication layer to simulate service degradation and determine how consumers respond in this situation. By making the delays very large they are able to simulate a node or even an entire service being down. This can be useful, as bringing an entire instance down can be problematic when the instance hosts multiple services and when it is not possible to do so through APIs. A rough sketch of injecting latency this way is shown after this list.

Conformity Monkey / Security Monkey
: Finds instances that don't adhere to best practices and shuts them down. An example would be checking that instances in AWS are launched into permission limited roles, and shutting them down if they are not. This forces the owner of the instance to investigate and fix the issue. Security Monkey is an extension that performs SSL certificate validation/expiry checks and other security best practice checks.

Doctor Monkey
: Checks the existing health checks that run on each instance to detect unhealthy instances. Unhealthy instances are removed from service.

Janitor Monkey
: Checks for unused resources and deletes or removes them.

10-18 Monkey (Localisation Monkey)
: Ensures that services continue to work in different international environments by checking that languages other than the base language continue to work.

Chaos Gorilla
: Similar to Chaos Monkey, but simulates an outage of an entire Amazon availability zone.
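
As mentioned under Latency Monkey above, here is a rough sketch (not Netflix's implementation) of one way to inject latency into a single host using Fabric and tc/netem. The interface name and delay value are assumptions:


from fabric.api import sudo

def add_latency(delay_ms=500, interface='eth0'):
    # delay all outgoing packets on the interface to simulate a degraded service
    sudo('tc qdisc add dev %s root netem delay %sms' % (interface, delay_ms))

def remove_latency(interface='eth0'):
    # remove the netem qdisc to restore normal behaviour
    sudo('tc qdisc del dev %s root netem' % interface)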

Well hopefully that explains what chaos testing / engineering is for those who were previously unsure. Feel free to contact me over Twitter or via the comments for further queries or information!

searchcode server released

searchcode server, the downloadable self hosted version of searchcode.com, is now available. A large amount of work went into the release, with a variety of improvements based on feedback from the general beta releases.


searchcode server has a number of advantages over searchcode.com that will eventually be back-ported. The full list of things to check out is included below,

  • New Single Page Application UI for smooth search experience
  • Ability to split on terms so a search for “url signer” will match “UrlSigner”
  • Massively improved performance 3x in the worst case and 20x in the best
  • Configurable through UI and configuration
  • Spelling suggestion that learns from your code

A few things of note,

  • Java 8 application built using Lucene and Spark Framework
  • Designed to work on any server. The test bench server is a netbook using an Intel Atom CPU and searches return in under a second
  • Scales to Gigabytes of code and thousands of repositories
  • Works on Linux, OSX and Windows

Be sure to check it out!


searchcode server

A month or so ago I started collecting emails on searchcode.com to determine if there was enough interest in a downloadable version of searchcode. The results were overwhelmingly positive. The email list grew far beyond what I would have expected, and this was in the first month. As such I have been working on this downloadable version of searchcode, which will probably be called searchcode server.

Progress has been reasonably straightforward considering that searchcode.com is written mostly in Python and searchcode server is mostly Java. The main reason for choosing Java is that I really wanted searchcode server to be a self contained application which could be downloaded and run without the configuration and setup of additional services.

At present it is surprisingly workable. You can input repositories (git only at this point) to be indexed and after a short amount of time they will be searchable via the main interface. A few screenshots are included at the end of this post for those curious.

There is still time to sign up and be one of the first to receive access. Being on the sign up list will also give you a discount when it is actually released if you need something greater than the community edition. To register your interest use the form below, or visit the searchcode server product page.

This has been released and you can now get the actual product.

[Screenshots of the searchcode server interface, taken 2015-12-29]

Pi-Hole for Ubuntu 14.04

Because I work for an ad supported company and searchcode.com is currently supported via third party advertising, I tend to keep an eye on the state of ad blockers on the web.

Most people probably know about adblockplus and other browser extensions, however there are other ways to block ads on one's network. One that I had previously read about was setting up your own Bind9 server and adding custom rules to block them at the DNS level. Over the last week I had been playing around with this, but since I am not a Bind expert I was unable to get it working in a satisfactory way.

However, the following article about blocking all ads using a Raspberry Pi appeared on my radar. I don't have a Raspberry Pi, but I did have an old netbook (Asus Eee 1000HA) lying around that I was trying to find some use for. I had previously set it up with Ubuntu 14.04 and had it running under the house as an OwnCloud test box. I thought it might be a good candidate for this sort of thing.

The install was pretty easy and as simple as following the guide on http://pi-hole.net/. It says that you need to be using Raspbian but it worked perfectly for me. Thankfully I have a reasonably good router (D7000, which I can highly recommend) and once the setup was done I pointed its DNS at the new server and sat back to wait for things to start working. It did. Flawlessly.

I think the advertising industry is in for a rude shock. When these devices are as cheap as this and as simple to install, it's only a matter of time before they become built in to the router itself or a plug and play product.

searchcode local

I am going to copy the searchcode pitch itself below quickly before explaining it a bit further.

“searchcode offers powerful code search over billions of lines of open source code. Imagine what it could do with your private repositories.

There have been requests to offer a downloadable version of searchcode. Given enough interest a downloadable hostable version of searchcode will be offered. Register your email below to register your interest.

Note that there would be a free Community version available for all users as well as paid version offering support. Functionality would remain the same across all versions. This would be similar to how Octopus Deploy is offered.”

In short I am considering writing a hostable version of searchcode. Most likely it would consist of a Java application one could download and use to get similar results to searchcode.com itself (probably at smaller scale however).

Rather than actually committing to several months' worth of work, however, I have put a message on searchcode asking for those interested to register their interest. If it sounds like something you would like, please register.

I have no signup target numbers in mind or product costs etc… but I suspect that given over 100 sign-ups I will actually go forth and implement it.

I should note that this is something I have been highly resistant towards for a long time as I do not really want to get into enterprise sales cycles.

Anyway, in a month's time if there are enough signups I will push forward and release an initial version to those who have signed up. Anyone who does will get free access on the beta list and discounts on the final version (should they need something more powerful than the community edition).

This has been released and you can now get the actual product.

Go Forth and Search

A very fast update. At the request of the excellent Lars Brinkhoff via GitHub I have added the language Forth as one of the supported languages inside searchcode.

An example search which shows this working would be the following https://searchcode.com/?q=forth&loc=0&loc2=10000&lan=181

I had to solve a number of interesting problems inside searchcode to support this change. For pragmatic reasons, the way searchcode identifies what language any piece of code is written in is to run it through CLOC (Count Lines Of Code). Written in Perl, it does a reasonably good job of pulling out metadata for any given piece of code. However, since my Perl ability is poor at best, submitting a patch to support Forth was not going to be an option.

Instead I ended up adding a few additional checks at the end of the indexing pipeline to identify code that probably should have been categorised as Forth and, if so, change the classification. It has been designed to be extensible, so if other languages come up that are not currently identified it should be possible to add them as well.
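
This is not searchcode's actual code, but the idea can be sketched roughly as follows: after the initial classification, look for Forth style colon definitions (": word ... ;") and reclassify when enough of them show up. The function names and threshold below are made up for illustration:


import re

# Forth word definitions look like ": SQUARED DUP * ;"
FORTH_DEFINITION = re.compile(r'^\s*:\s+\S+.*;\s*$', re.MULTILINE)

def looks_like_forth(contents, threshold=3):
    # return True if the file contains several Forth style word definitions
    return len(FORTH_DEFINITION.findall(contents)) >= threshold

def reclassify(language, contents):
    # only override the classification when the content strongly suggests Forth
    if language != 'Forth' and looks_like_forth(contents):
        return 'Forth'
    return language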

The only other change of note for searchcode is that I fixed the SSL certificate chain and now you can curl the API again. This was an issue caused by Google throwing its weight around and outlawing SHA1 certificates. When updating to fix this I neglected to fix the chain as well. Oddly, browsers worked without issue whereas curl and Python requests broke.
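
If you hit something similar, a quick way to check what certificate chain a server is actually presenting (a standard openssl invocation, nothing searchcode specific) is:


$ openssl s_client -connect searchcode.com:443 -showcerts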

Exporting Documents from KnowledgeTree 3.7.0.2

I was recently tasked with exporting a large collection of documents from KnowledgeTree (KT) for a client. The collection was too large to use the download all functionality and too wide to attempt to export each folder individually.

I had played around with the WebDav connection that KT provides but it either didn’t work or was designed deliberately to not allow exporting of the documents.

I looked at where the documents were stored on disk, but KT stores them as numbered files in numbered directories sans extension or folder information.

Long story short, I spent some time poking through the database to identify the tables containing the metadata needed to rebuild the tree as a proper filesystem. For the record, the tables required are the following,

  • folders – Contains the folder tree. Each entry represents a folder and contains its parent folder id.
  • documents – Contains the documents that each folder contains. Knowing the folder's id you can determine what documents live in that folder.
  • document_content_version – Contains the metadata required to get the actual file from disk. A 1 to 1 mapping between document id and this table is all that is required.

That said, here is a short Python script which can be used to rebuild the folders and documents on disk. All that is required is to ensure that the Python MySQLdb module is installed and to set the database details. Depending on your KT install you may need to change the document location. Wherever the script is run it will replicate the folder tree containing the documents, preserving the structure, names and extensions.

Keep in mind this is a fairly ugly script abusing global variables and such. It is also not incredibly efficient, but it did manage to extract 20GB of files in my case in a little under 10 minutes.

import MySQLdb
import os
import shutil

# KnowledgeTree default place to store documents
ktdocument = '/var/www/ktdms/Documents/'

conn = MySQLdb.connect(user='', passwd='',db='', charset="utf8", use_unicode=True)
cursor = conn.cursor()

# global variables FTW
cursor.execute('''select id, parent_id, name from folders;''')
allfolders = cursor.fetchall()

cursor.execute('''select id, folder_id from documents;''')
alldocuments = cursor.fetchall()

cursor.execute('''select document_id, filename, storage_path from document_content_version;''')
document_locations = cursor.fetchall()


# create folder tree which matches whatever the database suggests exists
def create_folder_tree(parent_id, path):
    directories = [x for x in allfolders if x[1] == parent_id]
    for directory in directories:
        d = '.%s/%s/' % (path, directory[2])
        
        print d
        os.makedirs(d)
        # get all the files that belong in this directory
        for document in [x for x in alldocuments if x[1] == directory[0]]:
            try:
                location = [x for x in document_locations if document[0] == x[0]][0]
                print 'copy %s%s %s%s' % (ktdocument, location[2], d, location[1])
                shutil.copy2('%s%s' % (ktdocument, location[2]), '%s%s' % (d, location[1]))
            except:
                print 'ERROR exporting - Usually due to a linked document.'

        create_folder_tree(parent_id=directory[0], path='%s/%s' % (path, directory[2]))

create_folder_tree(parent_id=1, path='')