Explaining VarnishHist – What Does It Tell Us

The varnishhist tool is one of the most underused Varnish utilities that comes with a standard Varnish install, probably because of how it appears at first glance.

In short, you want as many | symbols as possible and you want everything far toward the left-hand side. The closer to the left, the faster the responses are being served, cached or otherwise. The more | symbols, the more items were served from cache.

A small guide,

'|' is a cache HIT
'#' is a cache MISS
'n:m' in the top left corner is the vertical scale
'n = 2000' is the number of requests being displayed (from 1 to 2000)

The X-axis shows, on a logarithmic scale, the time between the request arriving from the kernel to Varnish and the response going from Varnish back to the kernel.

The times on the X-axis are as such,

1e1 = 10 sec
1e0 = 1 sec
1e-1 = 0.1 secs or 100 ms (milliseconds)
1e-2 = 0.01 secs or 10 ms
1e-3 = 0.001 secs or 1 ms or 1000 µs (microseconds)
1e-4 = 0.0001 secs or 0.1 ms or 100 µs
1e-5 = 0.00001 secs or 0.01 ms or 10 µs
1e-6 = 0.000001 secs or 0.001 ms or 1 µs or 1000 ns (nanoseconds)
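
If you want to double-check where a given response time lands on that scale, here is a quick sketch in Python (the function name is mine, purely for illustration),

import math

def axis_exponent(seconds):
    # Which 1eN column of the varnishhist X-axis a response time falls under
    return int(math.floor(math.log10(seconds)))

print(axis_exponent(0.120))    # -1, the 1e-1 (100 ms) region
print(axis_exponent(0.00035))  # -4, the 1e-4 (100 us) region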

Below is the varnishhist for searchcode.com, showing that while most responses are served in about 100ms, not many are cached. This can mean one of a few things.

  • The responses are not cacheable and you need to adjust the back-end responses to have the correct headers (or override the settings with VCL config).
  • The cache timeout for the back-end responses isn’t high enough to ensure that later requests are served from cache.
  • There isn’t a large enough cache to hold all the responses (that’s the problem in this case).
1:20, n = 2000

            |                    ####
            |                    ####
            |                    ####
            |                    #####
            |                    #####
            |                    #####
            |                   #######
            |                   #######
            ||  |    #      #   ##########
|1e-6  |1e-5  |1e-4  |1e-3  |1e-2  |1e-1  |1e0   |1e1   |1e2

MySQL Dump Without Impacting Queries

Posted more for my personal use (I have to look it up every time), but here is how to run a mysqldump without impacting performance on the box. It sets the ionice and nice values as low as possible (while still running), uses a single transaction, and ups the max allowed packet size for MySQL.

ionice -c2 -n7 nice -n19 mysqldump -u root -p DATABASE --single-transaction --max_allowed_packet=512M > FILENAME

To all Companies Currently Recruiting

I am writing this on behalf of all developers/engineers out there. Please stop with the take-home coding challenge questions. Really. Just stop it. They are a lazy and, frankly, unprofessional way of sorting the wheat from the chaff. Before closing your browser in disgust, hear me out on this one and hopefully I can convince you of the error of your ways.

There is an alarming trend these days of companies issuing lengthy coding challenges during the hiring process to prove that the individual they are hiring knows their stuff. I totally understand why you might be doing this, but frankly it's flawed. Let's go through several reasons why.

1. It shows a lack of regard for the individual's time. This is a serious red flag. After all, if you fail to respect my time, why should I respect yours? Consider it this way: for every hour your test takes, you are taking away an hour of that person's family/rest/sleep/eating/hobby time. What is your commitment in this situation? If you aren't willing to invest the time in the hiring process, then why should we? Sure, you need to review the code (and hopefully give feedback), but that usually takes less than 15 minutes (yes, I have done this and you can do it very quickly). The time investment balance is horribly skewed in your favour, which gets the relationship off to a bad start. Let's also consider those who do consulting or contracting on the side. If you ask them to do a 3 hour coding challenge, then you are asking them to forgo somewhere in the region of $300-700. Even worse, usually at the end of these tests all you get back is “Pass” or “Fail”. Even when you do give some feedback, there is no way it pays back the amount of time that went in.

2. Long tests are irrelevant anyway. When you set someone up to solve a problem that takes several hours, you are setting them up to fail. The test is usually set up such that you are looking for a specific answer and only that answer. It might be obvious to you that using the strategy pattern is the best way to solve your test, but what about those whose development background requires business logic rules to be user editable, and hence live in a database where they can be modified? You are also going to nitpick over every little detail in a classic case of bike shedding. “They didn't clean up some spaces!” “They didn't comment this method!” “They did comment this method!” Even being aware of this problem, I still find myself falling into the trap of nitpicking over the small stuff that usually doesn't matter. You are going to be investing time in training them how you want things done anyway, so why expect them to know it without even working with you?

3. It’s going to drive away talented individuals. I am not counting myself in this category, but I know of many talented devs who simply refuse to do any sort of test. Why? They have huge online portfolios of work they have previously done. Why do you even ask for our GitHub/GitLab profiles if not to look at them? Surely 5 minutes of looking through would let you know if this person is a pretender or knows their stuff.

4. The hiring process is skewed towards the hirer. People looking for work are usually in one of two situations: either they have a role and are looking around, or they don't have one. Let's consider the risk for both groups.

For those with a job, they will need to resign from their current position to take an opportunity with you. There is usually a probation period following the hire, during which either party can say “No, this isn't for me, let's agree to part as friends”. The hiree is the one taking on the larger risk in this situation. They have already left what was possibly a stable situation for an unstable one. The company takes on the risk of potentially investing in someone who then leaves. Realistically though, the company is in a position to hire, so remuneration is the least of their problems. For the hiree, the potential is that they join and after a month get the boot. They are now without income and back to looking for a job, only now they are in the latter category.

For those without a job, they have the situation that they might find employment, passing on other opportunities in doing so, only to get the boot after a few months. Once again it's skewed towards the company.

5. They are easily gamed. A quick check on Freelancer or other such sites shows that I can probably buy a solution to any coding challenge you have for less than $300. For a job with a steady paycheck, even if it's only for a few months (before you find out I can't code a damn), this is a no-brainer, especially if I know my coding chops aren't up to standard. But why pay at all? A lot of the solutions are posted on online forums and GitHub already. Heck, just ask your best mate who happens to know their stuff to do it for you. What's the cost to the organisation here? Well, assuming you do hire me based on my fraudulent test, I could lurk in the company for months, either contributing nothing or perhaps causing all sorts of damage. At the very least you have paid several months' salary.

“A-hah!” you say. “I can defeat what you have written above! I will just have another test during the interview process.” DING DING DING, WINNER! However, if you are going to do that anyway, why not drop the initial challenge? After all, what are you gaining?

Here’s what I propose. Bring the person you want to interview in and have them actually code on a machine in front of you. Work through a few simple problems together. It doesn't need to be complex. Ask them to write a function that reverses all the words in a string but not the string itself. Ask them how to find elements that exist in list A but not in list B. Ask them why they implemented certain patterns, how they decided on data types, why they did or didn't comment a method. Let them know in advance that this is what is going to happen and what will be expected. You will learn far more in this short investment of time than with any coding challenge. If you want to get more in-depth, then why not offer to pay them for a single day to come in and actually work? The monetary cost will be less than making the wrong hire, and you both get to decide if things are going to work out.
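
To give a feel for the level involved, here is a rough sketch of those two questions in Python (the function names are mine, and I am reading “reverse the words but not the string” as reversing each word in place),

def reverse_words(sentence):
    # Reverse each word but keep the word order: 'abc def' -> 'cba fed'
    return ' '.join(word[::-1] for word in sentence.split(' '))

def in_a_not_b(a, b):
    # Elements of list A that do not appear in list B
    b_set = set(b)
    return [x for x in a if x not in b_set]

print(reverse_words('hello world'))   # 'olleh dlrow'
print(in_a_not_b([1, 2, 3], [2, 3]))  # [1]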

The best interview process I have been through to date was actually the Microsoft one, for an intern role many, many moons ago. It involved several hours of interviews with different individuals discussing different technology roles to try and ascertain the best fit.

When I walked out (I didn’t get the role, BTW) I felt like a better person. Not only were the discussions interesting, I learnt a lot from those conducting them and I felt like my time was valued. They even offered to cover my travel expenses, which made me feel like they cared about my time investment. This is how the process is meant to work. It's as much about the person being interviewed as about the company. If you are tracking the time spent (yes, it is an investment!) as a cost, consider it part of your advertising budget. Good interviews stick with people a long time and GREAT ones make those people want to praise your company.

GPL Time-Bomb: an interesting approach to #FOSS licensing

UPDATE: Following some feedback, I am going to rename my usage of “Time-Bomb” due to the potential negative connotation of the words. I am going to call it “Eventually Open”. A few other things also need mentioning. I am not looking for code submissions back into the source at this time. This was a move to show that there are no back-doors in the code sending source code back to a master server.

About a week ago I released searchcode server under the fair source licence. From day one I had wanted to release it using some form of licence where the code was available, but I wanted to lock it down somewhat because, frankly, I do want to make some money out of my time investment. That's not the whole story however. I did not want to create another “look but don't touch” situation forever, and I certainly didn't want searchcode to be constrained by a licence in the event that I die, lose interest or stop updating the code.

The result is that I have added what I am going to call a GPL Time-Bomb into the licensing of searchcode server. Here is how it works. After a specified period of time, the current version of searchcode server can be re-licensed under the GPL v3. This is a shifting date, such that each new release extends its own time-bomb further into the future, but the dates for older releases stay fixed. The time-bomb for versions 1.2.3 and 1.2.4 goes off on the 27th of August 2019, at which point you can take that source under the GPL 3.0. Assuming searchcode server 6.1.2 comes out at roughly the same time, its time-bomb will be set to the 27th of August 2022, but the 1.2.3 release will be unaffected.

In short, I have given myself a time limit of 3 years to make money out of the product, and if I am unable to, it is turned over to the world to use as they see fit. Even better, assuming searchcode server becomes a successful product, I will be forced to continually improve and upgrade it if I want to keep a for-sale version without there being an equivalent FOSS version around (which in theory could be maintained by the community). Everyone wins from this arrangement, and I am not forced to rely on a support model to pay the bills, which frankly only works when you have a large sales team.

Here’s hoping this sort of licensing catches on, as there are so many products out there that could benefit from it. If it takes off, creators have an incentive to maintain and not milk their creations, and those that become abandoned end up available for public use, which I feel is a really fair way of licensing software.

Agree? Disagree? Email me or hit me up on twitter.

searchcode server under fair source

A very quick blog today. I have released searchcode server under the fair source licence. This means that as of a few days ago you can view the source, change it, modify it and run it as you see fit, so long as you have fewer than 5 users.

The source is hosted on GitHub (I may move this to GitLab sometime in the future) and you can view it here.

So what does this mean? Well, the community edition still exists (run searchcode with as many users as you want), as do the paid versions with support and all the full features. The real advantage, however, is that you can now vet the source code to ensure that searchcode server is not secretly sending your most valuable asset to some hidden server somewhere. In addition, it means I can now talk about the source openly, and I will be writing some posts about how I ran into some CPU branching issues which slowed down some code.

Good news all around then. Be sure to check out the source and let me know what you think.

The Worst Individual I Ever Worked With

Taken from a comment I posted on HN in a thread about a Soccer Con Man.

Not actually a programmer. The guy was hired to be a project manager.

After he joined, things were as expected, but after a few weeks we noticed that he was rarely around after lunch, and never around after lunch on a Friday.

We would email him at those times deliberately to catch him out, and I recall starting to put sticky notes on his laptop: “Came to see you at X time”. He would come back, just dump all the notes in the rubbish and claim he never got them. He would often claim to be working from home, despite his laptop being on his desk and usually closed. He would also never respond to email or IM during those times.

A classic seagull manager, he would appear when something went wrong, make a lot of noise, write a lot of emails and then vanish. He would also be sure to be seen when something was delivered, often staying back late at those times.

It got so bad that one friend of mine started tracking when he was around, and noting each time one of his relatives died. During his tenure the following incidents occurred,

– Hot water system blew up. 4 times. He had pictures which he would show all the time.

– Uncles, aunts, and various other family members died, to a total of 20 individuals.

– Our time tracking showed him to be in the office less than 15 hours a week on average.

We started to suspect he had a second job and was pulling the same con on them. This was never proved, but we did find someone who had worked with him previously, and they reported the same behaviour.

The worst thing was that it was raised with management at least several dozen times and nothing ever happened. He managed to pull this scam off for 4 years. I could not believe the waste of money this guy was: literally $500,000 burnt on a useless individual.

Types of Testing in Software Engineering

There are many different types of testing in software engineering. They should not be confused with the test levels: unit testing, integration testing, component interface testing, and system testing. However, the different test levels may be used by each type as a way of checking for software quality.

The following are all different types of tests in software engineering.

A/B
: A/B testing compares two outputs where a single variable has changed. It is commonly used when trying to increase conversion rates for online websites. A real genius in this space is Patrick McKenzie, and a few very worthwhile articles to read about it are How Stripe and AB Made me A Small Fortune and AB Testing
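
As a sketch of the mechanics (the experiment name and bucketing scheme below are made up for illustration), a deterministic assignment of users to variants might look like,

import hashlib

def ab_variant(user_id, experiment='checkout-button'):
    # Hash the user and experiment together so assignment is stable across visits
    digest = hashlib.sha1(('%s:%s' % (experiment, user_id)).encode()).hexdigest()
    return 'A' if int(digest, 16) % 2 == 0 else 'B'

print(ab_variant('user42'))  # always the same variant for this user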

Acceptance
: Acceptance tests usually refer to tests performed by the customer. Also known as user acceptance testing or UAT. Smoke tests are considered an acceptance test.

Accessibility
: Accessibility tests are concerned with checking that the software is able to be used by those with vision, hearing or other impairments.

Alpha
: Alpha testing consists of operational testing by potential users or an independent test team before the software is feature complete. It is usually an internal acceptance test performed before the software is released into beta testing.

Beta
: Beta testing follows alpha testing and is a form of external user acceptance testing. Beta software is usually feature complete but with unknown bugs.

Concurrent
: Concurrent tests attempt to simulate the software in use under normal activity. The idea is to discover defects that occur in this situation but are unlikely to occur in other, more granular tests.

Conformance
: Conformance testing verifies that software conforms to specified standards. An example would be checking a compiler or interpreter to see if it works as expected against the language standard.

Compatibility
: Compatibility tests check that software is compatible with other software on a system. Examples would be checking the Windows version, the Java runtime version, or that other software to be interfaced with has the appropriate API hooks.

Destructive
: Destructive tests attempt to cause the software to fail. The idea is to check that the software continues to work even under unexpected conditions. This is usually done through fuzz testing and by deliberately breaking subsystems, such as the disk, while the software is under test.

Development
: Development testing is testing done by both developers and testers during the development of the software. The idea is to prevent bugs during the development process and increase the quality of the software. Methodologies to do so include peer reviews, unit tests, code coverage and others.

Functional
: Functional tests generally consist of stories focussed around the user's ability to perform actions or use cases, checking whether functionality works. An example would be “can the user save the document with changes”.

Installation
: Installation tests ensure that software is installed correctly and works as expected on a new piece of hardware or system. Commonly run after software has been installed as a post check.

Internationalisation
: Internationalisation tests check that localisation for other countries and cultures in the software is correct and inoffensive. Checks can include currency conversions, word range checks, font checks, timezone checks and the like.

Non functional
: Non functional tests cover the parts of the software that are not covered by functional tests. These include things such as security or scalability, which generally determine the quality of the product.

Performance / Load / Stress
: Performance, load or stress testing is used to see how a system performs under certain high or low workload conditions. The idea is to see how the system behaves under these conditions, and it can be used to measure scalability and resource usage.

Regression
: Regression tests are an extension of sanity checks which aim to ensure that previous defects, for which a test was written, do not re-occur in a given software product.
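
As a minimal sketch (the slugify function and its old defect are invented for illustration), a regression test pins down the fixed behaviour so the defect cannot silently return,

import unittest

def slugify(text):
    # Hypothetical function which once failed to strip leading whitespace
    return text.strip().lower().replace(' ', '-')

class RegressionTests(unittest.TestCase):
    def test_leading_whitespace_defect(self):
        # Written when the defect was fixed; fails if it ever re-occurs
        self.assertEqual(slugify('  Hello World'), 'hello-world')

if __name__ == '__main__':
    unittest.main()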

Realtime
: Realtime tests check systems which have specific timing constraints, for example trading systems or heart monitors.

Smoke / Sanity
: Smoke testing ensures that the software works for most of the functionality and can be considered a verification or acceptance test. Sanity testing determines whether further testing is reasonable, having checked a small set of functionality for flaws.

Security
: Security testing is concerned with checking that software protects against unauthorised access to confidential data.

Usability
: Usability tests are manual tests used to check that the user interface, if any, is understandable.

Syncing Stash/BitBucket with searchcode server

Recently I needed to perform a slight integration piece between an on-premises Stash/BitBucket install and a searchcode server install. Thankfully both have an API, and very thankfully there is a nice Python library for talking to Stash/BitBucket.

Below is the code used. It pulls out all of the repositories from every project, checks whether each exists in searchcode, and if not adds it as a repository to be indexed. You need to install stashy (pip install stashy) and run it whenever you have new repositories. One idea is to set it up as a cron task to ensure everything stays in sync.

Note that this does not remove repositories that have been indexed, but it would not take much work to achieve it.

import stashy
from hashlib import sha1
from hmac import new as hmac
import urllib2
import json
import urllib

# searchcode server API keys (placeholders, replace with your own values)
publickey = "MY_SEARCHCODE_PUBLIC_KEY"
privatekey = "MY_SEARCHCODE_PRIVATE_KEY"

def getstashrepos():
    '''Returns every repository from every project on the Stash instance'''
    stash = stashy.connect("https://mystashserver/", "STASH_USERNAME", "STASH_PASSWORD")

    projects = stash.projects.list()
    repos = [stash.projects[x['key']].repos.list() for x in projects]

    stashrepos = []

    for repo in repos:
        stashrepos = stashrepos + [{'name': x['project']['key'] + '-' + x['slug'],
                                    'cloneUrl': x['cloneUrl'],
                                    'browse': x['links']['self'][0]['href']} for x in repo]

    return stashrepos

def addtosearchcode(repo):
    '''Adds a single repository to searchcode server via its signed API'''
    reponame = repo['name']
    repourl = repo['cloneUrl']
    repotype = "git"
    repousername = "STASH_USERNAME"
    repopassword = "STASH_PASSWORD"
    reposource = repo['browse']
    repobranch = "master"

    message = "pub=%s&reponame=%s&repourl=%s&repotype=%s&repousername=%s&repopassword=%s&reposource=%s&repobranch=%s" % (

    sig = hmac(privatekey, message, sha1).hexdigest()

    url = "http://mysearchcodeserver/api/repo/add/?sig=%s&%s" % (urllib.quote_plus(sig), message)

    data = urllib2.urlopen(url)
    data = data.read()

    data = json.loads(data)
    # note 'sucessful' is spelt this way in the API response itself
    print reponame, data['sucessful'], data['message']


message = "pub=%s" % (urllib.quote_plus(publickey))

sig = hmac(privatekey, message, sha1).hexdigest()
url = "http://mysearchcodeserver/api/repo/list/?sig=%s&%s" % (urllib.quote_plus(sig), message)

data = urllib2.urlopen(url)
data = data.read()

data = json.loads(data)
existingrepos = [x['name'] for x in data['repoResultList']]

for repo in getstashrepos():
    if repo['name'] not in existingrepos:
        addtosearchcode(repo)

Python Fabric: Getting File from Host as String

When using fabric for deployments you will sometimes want to check an existing file for the presence of a value before applying an update. A common example I run into is checking if an apt source has already been added before adding it again. This is a little clunky in fabric, but thankfully you can write a simple helper which takes care of it for you.

def _get_remote(fileloc):
    '''Pulls back a file's contents from the connection as a string'''
    from StringIO import StringIO
    from fabric.api import get

    fd = StringIO()
    get(fileloc, fd)
    content = fd.getvalue()
    return content

Usage is fairly simple. Say we want to install the latest version of Varnish Cache on an Ubuntu server. Usage like the following works,

if 'https://repo.varnish-cache.org/ubuntu/ trusty varnish-4.0' not in _get_remote('/etc/apt/sources.list.d/varnish-cache.list'):
    sudo('curl https://repo.varnish-cache.org/ubuntu/GPG-key.txt | sudo apt-key add -')
    sudo('''sudo sh -c 'echo "deb https://repo.varnish-cache.org/ubuntu/ trusty varnish-4.0" >> /etc/apt/sources.list.d/varnish-cache.list' ''')
    sudo('apt-get -y update')

At this point you should be able to install the latest version of varnish with a simple apt-get.
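
For completeness, that final step in the same fabfile could be as simple as the following (assuming the package is named varnish, as it is in that repository),

sudo('apt-get -y install varnish')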

searchcode.com: The Architecture – migration 3.0

On the 27th of July 2016 at about 9:30pm my local time (GMT+10), I updated the A records for searchcode's nameservers to point at a new stack that had been several months in the making. As with most posts of this nature, here is a quick recap of where things were and where they are now.

The previous searchcode stack consisted of two dedicated servers hosted by Hetzner. I discussed this about two years ago when writing about searchcode next. The first server was a reasonably powerful machine running pretty much all of the code required to deliver searchcode.com itself, with the exception of the actual index. searchcode is a Django application and was using nginx to serve results directly out of memcached where possible to avoid consuming a running Gunicorn process. The move to a second server for the index came pretty quickly after searchcode was released, as it was just not performant enough with everything on a single box, which was the situation for a short time.

searchcode before

This structure worked well for the last 2 years or so, but I had noticed that the load average on the frontend was starting to average out at about 3.0+ and would quite often rise to 7.0+. Considering the machine had only 4 real CPU cores this was an issue. Interestingly it also capped the number of requests that searchcode could respond to. In addition, the MySQL database had picked up some corruption somewhere along the way, which made the app increasingly unstable; it was going down at least once a month, and nothing short of a reboot would save it. Finally, I had grown as a developer over the last two years and it was time to move to something more stable and better performing.

The first thing in this modern cloud world was to start looking at moving to something like AWS or DigitalOcean. I first created a simple spreadsheet with what I was looking to move towards, along with expected costs. In short I wanted to break searchcode apart so that it consisted of the following parts,

Frontend server. Would provide SSL termination to Varnish and reverse proxy back to the application servers. Requires either a lot of RAM or fast disk; if RAM is low but disk is fast, Varnish can be used with a disk cache.

Backend server. Would run the application itself. Scales horizontally. Requires a fast CPU to process the code results.

Indexer server. Would host the Sphinx indexer. Scales horizontally. Requires a fast CPU, reasonable amounts of RAM and about 25 GB of disk space per CPU core, preferably SSD.

Database server. Would host the MySQL database. Does not scale horizontally. Requires lots of RAM (more than 4 GB) and 2 TB of disk space.

With these requirements in mind, I started shopping around.

AWS was ruled out almost immediately due to cost. This was in spite of the fact that I have a great deal of experience with the AWS APIs. As a rule I run searchcode.com as lean as possible, and AWS, while brilliant, would have ended up costing more than I would have liked.

I then started looking at DigitalOcean. I have always been a fan of how fast they spin instances up, their simple API and their prices. They also have excellent support and are pretty lenient when it comes to how much CPU and DiskIO you consume. When I started looking, however, they had not yet launched Block Storage, which meant I needed the $640 a month plan for the database, which was even more expensive than AWS. Later they did release block storage, but the resulting price was still rather high. I still use DigitalOcean for spinning up test stacks.

I also looked at Vultr. It's pretty safe to say they are a DigitalOcean clone, but with more data centres, competitive prices and much worse customer support. They do have one intriguing server option, however, not mentioned on their public website: a storage server. It's a server backed by a regular spinning-rust style disk, but as such it is far cheaper than anything offered by any other company. A database server with 1 TB of storage, 4 GB RAM and 2 CPUs (the largest storage instance they have) is only $40 a month.

A believer in infrastructure as code, I was very keen to move to one of the cloud providers, and I picked Vultr based on the storage instance and cost. I started importing the searchcode database into the storage instance. However, about 3/4 of the way through I received an email from Vultr support saying that I was exceeding the DiskIO of the storage instance with a sustained 125 IOPS. Fair enough, and I would have been willing to throttle it back if I could get some assurance about the terms of service. Sadly Vultr lived up to their reputation for poor support and I am yet to hear back. I terminated the instance and started looking again. They do offer dedicated instances and the prices are actually not too bad, but they did not have enough disk space for my needs. I still use them for spinning up test stacks, as they have a Sydney data centre which for me has lower network latency and offsets the slower creation time.

I realised that the only real option for something like searchcode (which is fully bootstrapped) is as much power as possible at the lowest price. I also don't mind getting my hands dirty and can do without the support. Lastly, I need to be able to abuse the machine as I see fit, and the only way to do that is to go dedicated.

Thankfully Hetzner has a very nice server auction house. You can browse through and pick up used servers at a considerable discount over a new order. In addition, you don't have to pay the setup fee. To avoid the database corruption issue I experienced previously, I went shopping for a server with ECC RAM for the database, and quickly found a tidy machine with 32 GB RAM and enough disk space. For the frontend I just picked up the cheapest 32 GB machine I could find, which interestingly was similar to my previous frontend machine but with 32 GB of RAM. Lastly, I went looking for something to serve as the backend and the indexer. I quickly realised that I could combine both machines into one and save a few dollars. With this saving I was able to overlook the initial setup fee and went for two machines with 32 GB of RAM and 500 GB SSDs. Since Hetzner cannot spin servers up and down like a cloud provider, I hard-coded the details into my fab file for building the stack, but otherwise everything deploys as though I was using a cloud provider. Only it costs considerably less and should be much, much faster.

searchcode current

The results?

Well, as mentioned, the previous instances were sitting around a load average of 3.0+ most of the time. The new backend/indexer boxes are sitting at a load average of 0.1+, which is a massive improvement. The database and frontend are similarly lightly loaded. The DNS at this point is still flipping over for some users, so it is not serving all results yet, but I cannot imagine the load rising beyond 1.0+ for everything.

With the hardware decisions discussed, let's dive into the software, starting with the data storage. MySQL has always been my database of choice, simply because I am more familiar with it than any other database. I had previously toyed with migrating to PostgreSQL but decided against it, as there are other things I should focus my time on that actually provide real value. As such I have stuck with MySQL for the latest searchcode version; however, one change I did make was to upgrade to 5.7 so I can leverage the native JSON data type. Otherwise the only change was to modify the MySQL config to take advantage of the power of the box, as per best practices.

The backend machines have Nginx set to listen for incoming connections from the frontend. They pass requests back to Gunicorn/Django, which performs the appropriate action. Where possible the result is served directly out of memcached via Django; if not, a request to Sphinx may be made for search results followed by a lookup to the database, or a direct request to the database. Gunicorn is configured with four worker processes per box. Sphinx is configured using a distributed index with 4 agents running on each box and one box as the master. Memcached is currently configured with 8 GB of RAM on each backend instance but is not pooled together.
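
As a rough sketch of that memcached-first flow (run_sphinx_search is a made-up stand-in for the real search path),

from django.core.cache import cache

def cached_search(query):
    # Serve from memcached where possible to avoid tying up a Gunicorn worker
    key = 'search:%s' % query
    result = cache.get(key)
    if result is None:
        # Fall back to Sphinx and/or MySQL, then cache the outcome
        result = run_sphinx_search(query)  # hypothetical stand-in
        cache.set(key, result, 3600)
    return result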

The frontend machine is configured with Varnish, with 24 GB of RAM allocated to cache. The remaining 8 GB is deliberately left over for the OS page cache and Nginx. Nginx is the frontend to the whole system, providing SSL termination and proxying back to Varnish. I did briefly consider using HAProxy for this role, but since I was already familiar with Nginx, and at the scale searchcode currently operates at, Nginx was the better choice. If in time there is much greater load, moving to HAProxy is something that will be considered.

What's next? Well, the future of searchcode.com at this point is to leverage the new machines to increase the size of the index and refresh the code more quickly. I have a few plans on how to do so and will release details in the next update. If you have read this far you are a beast! Thanks for the support, and feel free to email me with any further questions.