Why is this GoLang solution faster than the equivalent Java Solution?

The below is a small story about how I took a program with a runtime of hours down to minutes, then to tens of seconds and finally to seconds. More for my own personal amusement than anything else.

At work there is a tradition of a Friday quiz being posted by the winner of the previous week. I missed out on the most recent one due to having to duck off early to do my tax, but the problem was rather an interesting one.

The challenge itself is not as simple as you would initially think and is taken from a 2015 IBM Ponder This https://www.research.ibm.com/haifa/ponderthis/challenges/May2015.html

Three people are playing the following betting game.
Every five minutes, a turn takes place in which a random player rests and the other two bet
against one another with all of their money.
The player with the smaller amount of money always wins,
doubling his money by taking it from the loser.
For example, if the initial amounts of money are 1, 4, and 6,
then the result of the first turn can be either
2,3,6 (1 wins against 4);
1,8,2 (4 wins against 6); or
2,4,5 (1 wins against 6).
If two players with the same amount of money play against one another,
the game immediately ends for all three players.
Find initial amounts of money for the three players, where none of the three has more than 255,
and in such a way that the game cannot end in less than one hour. (So at least 12 turns)
In the example above (1,4,6), there is no way to end the game in less than 15 minutes.
All numbers must be positive integers.

Only one person managed to find an answer, let's call him Josh (because that's his name), having spent a few hours writing up a solution using his favourite programming language Go. Come Monday morning I arrived, looked at the quiz and became intrigued. Could I write a version that would outperform his? After all, Go is a pretty performant language, but I suspected that he may have missed some easy optimisations, and if I picked something equally fast I should be able to at least equal it.

I took a copy of his code https://gist.github.com/walesey/e2427c28a859c4f7bc920c9af2858492 (since modified to be much faster), which had a runtime of 1 minute 40 seconds (on my laptop), and started work.

Looking at the problem itself we have a few pieces of information we can use. One is that we don't ever need to calculate the 12th turn: if the game makes it through the 11th turn and still continues then it has lasted at least 12, which saves us some calculations. Another is that if any two money amounts are the same we can stop the game instantly, and more importantly not even add those starting amounts to the loop.

Given that the existing solution was written in Go it seemed insane to some that I started writing my solution, at least initially, in Python. This however is not as crazy as it seems. Python being highly malleable allows for rapid iteration, trying out a few things before moving over to another language.

The first thing to note is that you need to generate a tree of every possible combination that a game can take. Then you iterate over each one for the initial starting amounts of money to determine if the game ever ends, and if not mark it as a game that does not finish. To generate the combinations I went with a fairly simple recursive strategy,

# Calculate all the possible sequences for who misses out for each turn
DESIRED_TURNS = 11  # if a game survives 11 turns it cannot end in under an hour

def calc_events(current=[], turn=0):
    if turn == DESIRED_TURNS:
        return [current]

    one = list(current)
    one.append(0)

    two = list(current)
    two.append(1)

    three = list(current)
    three.append(2)

    turn += 1
    path1 = calc_events(current=one, turn=turn)
    path2 = calc_events(current=two, turn=turn)
    path3 = calc_events(current=three, turn=turn)

    return path1 + path2 + path3

The result of running the above is an array of arrays containing each situation where someone sits out a turn. It is important to note that this produces a list that varies from the back (big-endian so to speak) as this will be a very important consideration later.

The result looks something like this (truncated to just 4 results),

[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]

Given the above I coded up a simple solution which looped through the combinations. Here it is in pseudocode.

for player1 money:
  for player2 money:
    for player3 money:
      for game in games:
        for turn in game:
          result = play turn using money amounts
        if result:
          print player1, player2, player3

The above is about as bad as it gets for an algorithm as we have 5 nested loops. As expected the runtime performance was horrible. In fact I added a progress bar which simply calculated how far into the first loop the application had reached. Using Python I worked out that it was going to take several hours to get a result using the above. Not good enough.
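
Fleshed out, the brute force looks roughly like the following sketch (purely illustrative: games is the output of calc_events above, turn is the helper shown a little further down, and the mapping of 0, 1 and 2 to whichever player sits out is an arbitrary choice):

for p1 in range(1, 256):
    for p2 in range(1, 256):
        for p3 in range(1, 256):
            # Equal starting amounts end a game immediately, so skip them outright
            if p1 == p2 or p1 == p3 or p2 == p3:
                continue

            survived_all = True
            for game in games:
                a, b, c = p1, p2, p3
                for sit_out in game:
                    if sit_out == 0:
                        ok, b, c = turn(b, c)
                    elif sit_out == 1:
                        ok, a, c = turn(a, c)
                    else:
                        ok, a, b = turn(a, b)
                    if not ok:
                        break  # two players tied, this sequence ends the game early
                if not ok:
                    survived_all = False
                    break  # one early-ending sequence is enough to reject this triple

            if survived_all:
                print(p1, p2, p3)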

A few calculations and I realised that I was asking my poor program to calculate something like 50 billion games. Clearly I needed to reduce the number of games and speed up the processing as much as possible. The first change I made was to only calculate the game turn events once and keep the result. I had also written the code to be as readable as possible, which is how you should write anything, but as readability can come at a cost here I also removed the following function by inlining it for improved performance.

def turn(p1, p2):
    # Two equal amounts end the game immediately
    if p1 == p2:
        return False, p1, p2

    # The player with the smaller amount wins, doubling up by taking from the other
    if p1 > p2:
        p1 = p1 - p2
        p2 = p2 + p2
    else:
        p2 = p2 - p1
        p1 = p1 + p1

    return True, p1, p2

The next thing I did was generate all the permutations of money amounts to play games with, once, into a single list. The last thing I did was switch from using Python to PyPy, which with its JIT should speed up the loops considerably. The result of all this can be found here https://gist.github.com/boyter/cab749f4713201f5b409c5b1353fc36c and its runtime using PyPy dropped to ~8 minutes.
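
That pre-generation step might look something like this sketch (the name money_sets is illustrative; by symmetry you could also restrict it to a < b < c to shrink the list further):

# Every starting combination with no equal amounts, built once up front
money_sets = [(a, b, c)
              for a in range(1, 256)
              for b in range(1, 256)
              for c in range(1, 256)
              if a != b and b != c and a != c]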

8 minutes was about 5 times slower than the GoLang program at this point, which is pretty good for a dynamic language like Python, especially considering that my implementation was single threaded whereas the Go one was using as many cores as it could get. My next step was to implement the same program in a faster language. The only other faster languages I have any experience in are C# and Java. Since I already had Java set up I went with that. I will also admit it was about proving a point: Go may be fast, but Java should be able to equal it for most tasks.

At this point however I mentioned to Josh that his Go program had some inefficiencies, the big one being that he was calculating the 12th turn needlessly. I then modified his program and with some other changes reduced the Go program's runtime down to ~40 seconds. I was starting to get worried at this point, as I was aiming to beat the Go program's performance by 5%. No idea why I picked this number but it seemed reasonable if I was smart with the implementation.

At first I ported the Python program in its original readable, reusable form to Java and ran it. The runtime was ~7 minutes. I then inlined the same functions and converted it over to use parallel streams. This time the runtime was about 90 seconds. This would have been fast enough had I not mentioned to Josh how he could improve his code. I had shifted the goalposts on myself and now had a new target of ~40 seconds.

After doing a few things such as inlining where possible, changing to an enum and some other small tweaks I had a runtime of ~60 seconds. The big breakthrough came after looking at the hot function: I realised that storing the game events in a List of Arrays meant that we had a loop inside a loop. It was however possible to flatten this into a single array of integers and reset the game every 11 turns with a simple if check.
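
In the Python prototype's terms the change looks something like the sketch below (the actual speed-up was made in the Java version, so treat this as an illustration of the idea rather than the code that shipped):

# Flatten the list of games into one long array of turn events...
flat_events = [event for game in games for event in game]

def survives(p1, p2, p3, flat_events, turns=11):
    a, b, c = p1, p2, p3
    for i, sit_out in enumerate(flat_events):
        # ...and reset the running amounts every 11 entries, i.e. at each game boundary
        if i % turns == 0:
            a, b, c = p1, p2, p3
        if sit_out == 0:
            ok, b, c = turn(b, c)
        elif sit_out == 1:
            ok, a, c = turn(a, c)
        else:
            ok, a, b = turn(a, b)
        if not ok:
            return False  # one sequence ended early, so this starting triple is out
    return True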

This was the breakthrough I needed. Suddenly the runtime dropped from ~60 seconds to about 23 seconds. I happily posted my success in the Slack channel with a link to the code and sat back.

The smile soon turned to a frown. Josh implemented the same logic in his Go program and it now ran in ~6 seconds. It was suddenly 5x faster than mine. At this point I had a mild panic and started playing with bitwise operations and other micro optimisations, before realising that no matter what I did he could simply implement the same change and get the same performance benefit, as we were both using roughly the same algorithm.

Something was clearly wrong. Either Go was suddenly 5x faster than Java at a basic for loop and integer maths, OR I had a bug in my code somewhere which was making it worse. I looked. Josh looked. A few other people looked. Nobody could work out what the issue was. At this point I didn't care about the runtime, I just wanted to know WHY it appeared to be running slower. I was so desperate for an answer I did what all programmers do at this point and outsourced it to the collective brain known as Stack Overflow http://stackoverflow.com/questions/43082115/why-is-this-golang-solution-faster-then-the-equivalent-java-solution

A few unconstructive comments came back, such as Java is slow (seriously, it's not 1995 anymore guys, Java is fast) etc… Thankfully one brilliant person managed to point out what I had missed. It was not the loop itself, but the input. Remember how I said that the generation of the events being big-endian was important? It turns out the Go program had done the reverse and generated them little-endian.

The thing about the core loop is that it has a bail-out condition. If two players have the same money amount we end the game and don't process any further. The worst possible situation for the loop is to process almost every turn only to find out right at the end that the game ended; that is a lot of wasted processing. Ideally you want to hit the failing conditions as soon as possible. Because my games varied from the end, I was forcing the Java program to process about 5x as many combinations as the Go program.

It just happened to be that Josh had picked a more optimal path through the games.

A simple reverse of the games (line 126 of the linked solution) and suddenly the Java program was running in ~6 seconds, the same time as the Go program. You can view the code here https://gist.github.com/boyter/42df7f203c0932e37980f7974c017ec5

For fun I tried running it on a 16 core VPS and it ran in ~2 seconds and maxed out all the cores, so it seems parallel streams do what you expect them to.

Interestingly, while Josh's ordering was more optimal than mine, it's probably still not the optimal path through this problem. There is bound to be a way to generate the games such that you hit the failing conditions as soon as possible, saving needless processing. There is probably a PhD thesis in there somewhere.

I figure this is probably as far as I want to take this. I did play around with bitwise operations and loop unrolling but the time didn't change that much.

I certainly had fun with the implementation and working things out. Some may argue that optimising a micro benchmark such as this is a waste of time, and generally they would be right. That said, there are occasions where you really do need to optimise the hell out of something, say a diff algorithm or some such. In any case, what developer does not dream of saving the day with some hand-unrolled loop optimisation that saves the company millions and brings them the praise and respect of their peers!

EDIT – A rather smart person by the name of dietrichepp pointed out that there is a better algorithm: rather than brute force the states, work backwards. You can read their comment on Hacker News and view their code in C++ on Github.

Setting up ConcourseCI 2.6.0 behind Nginx with Self Signed Certificates on Ubuntu 16.04

Concourse CI is a very nice continuous integration server.

However for installs there are a few gotchas you need to keep in mind. Mostly these relate to how TLS/SSL works.

The first is that while it is possible to run Concourse inside Docker, I found this caused a lot of issues with workers dying and not recovering. I would suggest installing the binaries on bare machines. When I moved from a Docker cluster using Amazon's ECS to a single t2.large instance not only were builds faster but it was a far more reliable solution.

I am also not going to automate this install, and will leave it as an exercise for you the reader to do that yourself. I would suggest using Python Fabric, or something like Puppet, Ansible or Saltstack to achieve this.

Also keep in mind that with this install everything is running on a single instance. If you need to scale out this is not going to work, but as a way to get started quickly it works pretty well.

Prerequisites are that you have an Ubuntu instance running somewhere. If you want to run the fly execute command you will also need a valid domain name to point at your machine. This is an annoyance caused by GoLang when using SSL certs: it turns out you cannot set a hosts file entry and use it as the hostname. You can in insecure non-SSL mode, but otherwise you cannot.

If you are using a virtual machine from DigitalOcean/AWS/Vultr or another provider you will need to add some swap space. I noticed a lot of issues when this was missing. You can do so by running the following commands, which will configure your server with 4G of swap space,

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo "/swapfile none swap sw 0 0" | sudo tee -a /etc/fstab

We will need to get the concourse binary and make it executable. For convenience, and to match the concourse documentation, let's also rename it to concourse. To do so run the following command.

wget https://github.com/concourse/concourse/releases/download/v2.6.0/concourse_linux_amd64 && mv concourse_linux_amd64 concourse && chmod +x concourse

We now need to generate the keys that concourse requires.

mkdir keys
 cd keys

ssh-keygen -t rsa -f tsa_host_key -N '' && ssh-keygen -t rsa -f worker_key -N '' && ssh-keygen -t rsa -f session_signing_key -N '' && cp worker_key.pub authorized_worker_keys
 cd ..

The above commands will create a directory called keys and setup all of the keys that concourse 2.6.0 requires.

We can now create some helper scripts which we can use to run concourse easily.

pico concourse.sh

./concourse web \
 --basic-auth-username main \
 --basic-auth-password MySuperPassword \
 --session-signing-key ./keys/session_signing_key \
 --tsa-host-key ./keys/tsa_host_key \
 --tsa-authorized-keys ./keys/authorized_worker_keys \
 --external-url https://YOURDNSHERE/ \
 --postgres-data-source postgres://concourse:concourse@

chmod +x concourse.sh

This script will start running concourse. Keep in mind that the username and password used here are for the main team and as such you should protect them, as they have the ability to create additional teams on your concourse instance.

pico worker.sh

./concourse worker \
 --work-dir /opt/concourse/worker \
 --tsa-host \
 --tsa-public-key ./keys/tsa_host_key.pub \
 --tsa-worker-private-key ./keys/worker_key

chmod +x worker.sh

This script will spin up a worker which will communicate with the main concourse instance and do all the building. It can be useful to lower the priority of this command using nice and ionice if you are running on a single core machine.

Now we need to install all of the postgresql packages required,

apt-get update && apt-get install -y postgresql postgresql-contrib

Once this is done we can create the database to be used

sudo -u postgres createdb concourse

Then log in to postgresql and create a user to connect to the database

sudo -u postgres psql
CREATE USER concourse WITH PASSWORD 'concourse';
GRANT ALL PRIVILEGES ON DATABASE "concourse" TO concourse;

We also need to edit the pg_hba file to allow us to make the connection,

sudo pico /etc/postgresql/9.5/main/pg_hba.conf

Scroll down and look for the following line,

host    all    all    127.0.0.1/32    md5

and change the md5 on the end to trust

host    all    all    127.0.0.1/32    trust

Then save the file and restart postgresql

service postgresql restart

At this point everything we need to run concourse should be there. You will need to set up the concourse scripts we created earlier to run as a service, or just run them in a screen session if you are in a hurry.

What we want to do now is expose it to the big bad internet.

apt-get install nginx

Create a directory using either the domain name you want to use, a desired name or anything if you are going to connect to things using IP addresses.

mkdir -p /etc/nginx/ssl/mydesireddomain.com

Now we want to switch to the directory and set up the self signed TLS/SSL keys.

cd /etc/nginx/ssl/mydesireddomain.com
 openssl genrsa -des3 -out server.key 1024

Enter whatever you want for the passphrase but remember it!

openssl req -new -key server.key -out server.csr

Enter the passphrase you set in the previous step. The most important thing here is that when asked for the Common Name or FQDN you need to enter the desired domain name.

With that done we need to sign the key.

cp server.key server.key.org
 openssl rsa -in server.key.org -out server.key

Remember to enter the same passphrase as before. Finally sign the key with an expiry of 9999 days.

openssl x509 -req -days 9999 -in server.csr -signkey server.key -out server.crt

Make a copy of the file server.crt which will be needed for the concourse fly tool to talk to the server if you are using self signed certs.

With that done let's enable the site,

sudo nano /etc/nginx/sites-available/mydesireddomain.com

And enter in the following details,

upstream concourse_app_server {
    server localhost:8080;
}

server {
    listen 80 default_server;
    rewrite ^ https://MYIPADDRESSORDOMAIN$request_uri? permanent;
}

server {
    listen 443 default_server ssl http2;
    server_name mydesireddomain.com;

    ssl on;
    ssl_certificate /etc/nginx/ssl/mydesireddomain.com/server.crt;
    ssl_certificate_key /etc/nginx/ssl/mydesireddomain.com/server.key;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://concourse_app_server;
    }
}
The above nginx config defines an upstream concourse server running on port 8080. It then defines a server listening on port 80 that redirects all traffic to the same host over HTTPS. The last server block defines our site, sets up the keys and forwards everything to the upstream concourse server.

We can now enable it,

sudo ln -s /etc/nginx/sites-available/mydesireddomain.com /etc/nginx/sites-enabled/mydesireddomain.com
rm -f /etc/nginx/sites-enabled/default

service nginx restart

We need to remove the default file above because nginx will not allow you to have two default servers.
At this point everything should be working. You should now be able to connect to your concourse server like so,

fly --target myciserver login --team-name myteamname --concourse-url https://MYIPADDRESSORDOMAIN --ca-cert ca.crt

Where the file ca.crt is the copy of server.crt made earlier, placed wherever you are running fly from. Everything at this point should work and you can browse to your concourse server.

If you are using the IP address to communicate to your concourse server you have the limitation that you will not be able to run fly execute to upload a task. You can get around this by using a real domain, or running your own DNS to resolve it correctly.

The only bit of homework at this point would be to configure the firewall to disable access to port 8080 so that everything must go through nginx. Enjoy!

Repository overview now in searchcode server

One feature that I have wanted for a long time in searchcode server was a page which would give an overview of a repository. I wanted the overview to give a very high level look at the languages used, the total number of files, estimated cost and who would be the best people to talk to.

One thing that occurred to me when I started work was that it would be nice to calculate a bus factor for the repository as well. After all, we all know that project managers like to know who the most critical contributors to any project are and what the risk is.

Below is what has been added into searchcode server.

The most interesting part of the above in my opinion is the overview blurb. It attempts to summarise all of the figures below it and let anyone know in plain English where the repository stands.

An example for searchcode server’s code itself,

“In this repository 2 committers have contributed to 228 files. The most important languages in this repository are Java, CSS and Freemarker Template. The project has a low bus factor of 2 and will be in trouble if Ben Boyter is hit by a bus. The average person who commits this project has ownership of 50% of files. The project relies on the following people; Ben Boyter, =.”

Certainly there is room for improvement on this page and I am hoping to add what I am calling signals logic to it. This would involve scanning the code to determine what languages, features and libraries are being used and add those to the report. The end goal would be to find for instance C# code using MySQL and ReactJS.

The last bit of news is that I am moving searchcode.com over to the same codebase. This should improve things in a few ways. The first being the improved performance that comes from moving from Python to Java. It should also mean that I can focus on a single codebase.

Anyway, you will be able to get the new repository overview in the next release of searchcode server, 1.3.6, which will be released before the end of January 2017.

Sphinx Real Time Index: How to Distribute and a Hidden Gotcha

I have been working on real time indexes with Sphinx recently for the next version of searchcode.com and ran into a few things that were either difficult to search for or just not covered anywhere.

The first is how to implement a distributed search using real time indexes. It's actually done the same way you would normally create a distributed index. Say you had a single server with 4 index shards on it and you wanted to run queries against all of them. You could use the following,

index rt
    type = distributed
    local = rt1
    agent = localhost:9312:rt2
    agent = localhost:9312:rt3
    agent = localhost:9312:rt4

You would need to have each one of your indexes defined (only one is added here to keep the example short)

index rt1
    type = rt
    path = /usr/local/sphinx/data/rt1
    rt_field = title
    rt_field = content
    rt_attr_uint = gid

Using the above you would be able to search across all of the shards. The trick is knowing that to update you need to update each shard yourself. You cannot pass documents to the distributed index but instead must make a separate update to each shard. Usually I split sphinx shards based on a query like the following,

SELECT cast((select id from table order by 1 desc limit 1)/4 as UNSIGNED)*2, \
         cast((select id from table order by 1 desc limit 1)/4 as UNSIGNED)*3 \
         FROM table limit 1

Where the 4 is the number of shards and the multiplier splits the shards out. It's performant due to index use. However for RT indexes I suggest a simple modulus operator % against the ID column for each shard, as it allows you to continue to scale out to each shard equally.
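
As a sketch of what that per-shard routing can look like from the application side (this assumes the rt1 to rt4 shards above, a SphinxQL listener such as listen = 9306:mysql41 in searchd, and the pymysql package; the insert_document name is just for illustration):

import pymysql  # SphinxQL speaks the MySQL wire protocol

NUM_SHARDS = 4

conn = pymysql.connect(host='127.0.0.1', port=9306, user='', autocommit=True)

def insert_document(doc_id, title, content, gid):
    # A modulus on the id spreads documents evenly and keeps the mapping stable
    table = 'rt%d' % ((doc_id % NUM_SHARDS) + 1)
    with conn.cursor() as cursor:
        cursor.execute(
            'INSERT INTO ' + table + ' (id, title, content, gid) VALUES (%s, %s, %s, %s)',
            (doc_id, title, content, gid))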

The second issue I ran into was that when defining the attributes and fields you must define all the fields before the uints. The above examples work fine but the below is incorrect. I couldn’t find this mentioned in the documentation.

index rt
    type = rt
    path = /usr/local/sphinx/data/rt
    rt_attr_uint = gid # this should be below the rt_fields
    rt_field = title
    rt_field = content

Explaining VarnishHist – What Does it Tell Us

The varnishhist tool is one of the most underused varnish tools that come with your standard varnish install. Probably because of how it appears at first glance.

In short, you want as many | symbols as possible and you want everything as far toward the left hand side as possible. The closer to the left, the faster the responses are, regardless of whether they are cached or not. The more | symbols, the more items were served from cache.

A small guide,

'|' is cache HIT
'#' is cache MISS
'n:m' numbers in left top corner is vertical scale
'n = 2000' is number of requests that are being displayed (from 1 to 2000)

The X-axis is logarithmic and shows the time between the request arriving from the kernel to Varnish and the response going from Varnish back to the kernel.

The times on the X-axis are as such,

1e1 = 10 sec
1e0 = 1 sec
1e-1 = 0.1 secs or 100 ms (milliseconds)
1e-2 = 0.01 secs or 10 ms
1e-3 = 0.001 secs or 1 ms or 1000 µs (microseconds)
1e-4 = 0.0001 secs or 0.1 ms or 100 µs
1e-5 = 0.00001 secs or 0.01 ms or 10 µs
1e-6 = 0.000001 secs or 0.001 ms or 1 µs or 1000 ns (nanoseconds)

Below is the varnishhist output for searchcode.com, showing that while most responses are served in about 100ms, not many are cached. This can mean one of a few things.

  • The responses are not cache-able and you need to adjust the back-end responses to have the correct headers (or override the settings with VCL config).
  • The cache timeout for the back-end responses isn’t high enough to ensure that later requests are served from cache.
  • There isn’t a large enough cache to hold all the responses (that’s the problem in this case).

1:20, n = 2000

            |                    ####
            |                    ####
            |                    ####
            |                    #####
            |                    #####
            |                    #####
            |                   #######
            |                   #######
            ||  |    #      #   ##########
|1e-6  |1e-5  |1e-4  |1e-3  |1e-2  |1e-1  |1e0   |1e1   |1e2

MySQL Dump Without Impacting Queries

Posted more for my personal use (I have to look it up every time) but here is how to run a mysqldump without impacting performance on the box. It sets the ionice and nice values to be as low as possible (but still run) and uses a single transaction and ups the max packet size for MySQL.

ionice -c2 -n7 nice -n19 mysqldump -u root -p DATABASE --single-transaction --max_allowed_packet=512M > FILENAME

To all Companies Currently Recruiting

I am writing this on behalf of all developers/engineers out there. Please stop with the take home coding challenge questions. Really. Just stop it. They are a lazy and frankly unprofessional way of sorting the wheat from the chaff. Before closing your browser in disgust hear me out on this one and hopefully I can convince you of the error of your ways.

There is an alarming trend these days of companies issuing lengthy coding challenges during the hiring process in order to prove that the individual they are hiring knows their stuff. I totally understand why you might be doing this but frankly it's flawed. Let's go through several reasons why.

1. It shows that you have a lack of regard for the individual's time. This is a serious red flag. After all, if you fail to respect my time why should I respect yours? Consider it this way: for every hour of time that your test takes you are taking away 1 hour of that person's family/rest/sleep/eating/hobby time. What is your commitment in this situation? If you aren't willing to invest the time in the hiring process then why should we? Sure, you need to review the code (and hopefully give feedback) but that usually takes less than 15 minutes (yes I have done this and you can do it very quickly). The time investment balance is horribly skewed in your favour, which gets the relationship off to a bad start. Let's also consider those who do consulting or contracting on the side. If you ask them to do a 3 hour coding challenge then you are asking them to forgo somewhere in the region of $300-700. Even worse is that usually at the end of these tests all you get back is "Pass" or "Fail". Even when you do give some feedback there is no way it pays back the amount of time that went in.

2. Long tests are irrelevant anyway. When you set someone up to solve a problem that takes several hours you are setting them up to fail. The test is usually set up such that you are looking for a specific answer and only that answer. It might be obvious to you that using the strategy pattern is the best way to solve your test, but what about those whose development backgrounds require business logic rules to be user editable and hence live in a database where they can be modified? You are also going to nitpick over every little detail in a classic case of bike shedding. "They didn't clean up some spaces!" "They didn't comment this method!" "They did comment this method!" Even aware of this problem I still find myself falling into the trap of nitpicking over the small stuff that usually doesn't matter. You are going to be investing time in training them how you want things anyway, so why expect them to know it without even working with you?

3. It's going to drive away talented individuals. I am not counting myself in this category but I know of many talented devs who simply refuse to do any sort of test. Why? They have huge online portfolios of work they have previously done. Why do you even ask for our Github/Gitlab profiles if not to look at them? Surely 5 minutes of looking through would let you know if this person is a pretender or knows their stuff.

4. The hiring process is skewed towards the hirer. People looking for work are usually in one of two situations: either they have a role and are looking around, or they don't have one. Let's consider the risk for both groups.

For those with a job, they will need to resign from their current position to take an opportunity with you. There is usually a probation period following the hire during which either party can say "No, this isn't for me, let's agree to be friends". The hiree is the one taking on the largest risk in this situation. They have already left what was possibly a stable situation for an unstable one. The company takes on the risk of potentially investing in someone who then leaves. Realistically though, the company is in a position to hire, so remuneration is the least of their problems. For the hiree though, the potential is that they join and after a month get the boot. They are now without income and back to looking for a job, only now they are in the latter category.

For those without a job, they face the situation where they might find employment, and in doing so pass on other opportunities, only to get the boot after a few months. Once again it's skewed towards the company.

5. They are easily gamed. A quick check on freelancer or other such sites shows that I can probably buy a solution to any coding challenge you have for less than $300. For a job with a steady pay-check, even if it's only for a few months (before you find out I can't code a damn), this is a no brainer, especially if I know my coding chops aren't up to standard. But why pay at all? A lot of the solutions are posted on online forums and Github already. Heck, just ask your best mate who happens to know their stuff to do it for you. What's the cost to the organisation here? Well, assuming you do hire me based on my fraudulent test I could lurk in the company for months, either contributing nothing or perhaps causing all sorts of damage. At the very least you have paid several months' salary.

A-hah you say! I can defeat what you have written above! I will just have another test during the interview process I can hear you thinking. DING DING DING WINNER! However, if you are going to do that anyway why not drop the initial challenge? After all what are you gaining?

Here's what I propose. Bring the person you want to interview in and have them actually code on a machine in front of you. Work through a few simple problems together. It doesn't need to be complex. Ask them to write a function that reverses all the words in a string but not the string itself. Ask them how to find elements that exist in list A but not in list B. Ask them why they implemented certain patterns, how they decided on data types, why they did or didn't comment a method. Let them know in advance that this is what is going to happen and what will be expected. You will learn far more in this short investment of time than with any coding challenge. If you want to get more in depth then why not offer to pay them for a single day to come in and actually work? The monetary cost will be less than making the wrong hire and you both get to decide if things are going to work out.
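
For what it's worth, the two problems above are small enough to sketch in a few lines of Python (one reasonable interpretation of each; the function names are just for illustration and the point is the conversation, not the exact answer):

def reverse_words(text):
    # "the cat sat on the mat" -> "mat the on sat cat the"
    # reverse the order of the words, not the characters of the whole string
    return ' '.join(reversed(text.split()))

def in_a_not_b(a, b):
    # Elements of list A that do not appear in list B, keeping A's order
    b_set = set(b)
    return [item for item in a if item not in b_set]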

The best interview process I have been through to date was actually the Microsoft one, for an intern role many many moons ago. It involved several hours of interviews with different individuals discussing different technology roles to try and ascertain the best fit.

When I walked out (I didn't get the role BTW) I felt like a better person. Not only were the discussions interesting, I learnt a lot from those conducting them and I felt like my time was valued. They even offered to cover my travel expenses, which made me feel like they cared about my time investment. This is how the process is meant to work. It's as much about the person being interviewed as about the company. Consider it an investment in your advertising budget if you are tracking the time investment (yes, it is an investment!) as a cost. Good interviews stick with people a long time and GREAT ones make those people want to praise your company.

GPL Time-bomb an interesting approach to #FOSS licensing

UPDATES Following some feedback I am going to rename my usage of "Time-Bomb" due to the potential negative connotation of the words. I am going to call it "Eventually Open". Also a few other things need mentioning. I am not looking for code submissions back into the source at this time. This was a move to show that there are no back-doors in the code sending source code back to a master server.

About a week ago I released searchcode server under the Fair Source licence. From day one I had wanted to release it using some form of licence where the code was available, but I wanted to lock it down somewhat because frankly I do want to make some money out of my time investment. That's not the whole story however. I did not want to create another "look but don't touch" situation forever, and I certainly didn't want searchcode to be constrained by a licence in the event that I die, lose interest or stop updating the code.

The result of this was that I added what I am going to call a GPL Time-Bomb into the licencing of searchcode server. Here is how it works. After a specified period of time the current version of searchcode server can be re-licensed under GPL v3. This is a shifting date such that each new release extends its own time-bomb further into the future, while the date for older releases stays fixed. The time-bomb for versions 1.2.3 and 1.2.4 takes place on the 27th of August 2019, at which point you can take the source under GPL 3.0. Assuming searchcode server 6.1.2 comes out at roughly the same time, its time-bomb will be set to the 27th of August 2022, but the 1.2.3 release will be unaffected.

In short, I have put a time limit of 3 years on making money out of the product, and if I am unable to, it is turned over to the world to use as they see fit. Even better, assuming searchcode server becomes a successful product, I will be forced to continually improve and upgrade it if I want to keep a for-sale version without there being an equivalent FOSS version around (which in theory could be maintained by the community). In short everyone wins from this arrangement, and I am not forced to rely on a support model to pay the bills, which frankly only works when you have a large sales team.

Here's hoping this sort of licencing catches on, as there are so many products out there that could benefit from it. If it takes off, creators have an incentive to maintain and not milk their creation, and those products that become abandoned end up available for public use, which I feel is a really fair way of licencing software.

Agree? Disagree? Email me or hit me up on twitter.

searchcode server under fair source

A very quick blog post today. I have released searchcode server under the Fair Source licence. This means that as of a few days ago you can view the source, change it, modify it and run it as you see fit, so long as you have fewer than 5 users.

The source is hosted on github (I may move this to GitLab sometime in the future) and you can view it here.

So what does this mean? Well the community edition still exists (run searchcode with as many users as you want) as do the paid versions with support and all the full features. The real advantage however is that you can now vet the source code to ensure that searchcode server is not secretly sending your most valuable asset to some hidden server somewhere. In addition it means I can now talk about the source openly and will be writing some posts about how I ran into some CPU branching issues which slowed down some code.

Good news all around then. Be sure to check out the source and let me know what you think.

The Worst Individual I Ever Worked With

Taken from a comment I posted on HN in a thread about a Soccer Con Man.

Not actually a programmer. The guy was hired to be a project manager.

After joining things were as expected but after a few weeks we noticed that he was rarely around after lunch and never around after lunch on a Friday.

We would email him at those times deliberately to catch him out, and I recall starting to put sticky notes on his laptop: "Came to see you at X time". He would come back and just dump all the notes in the rubbish and claim he never got them. He would often claim to be working from home, despite his laptop being on his desk and usually closed. He would also never respond during those times to email or IM.

A classic seagull manager, he would appear when something went wrong, make a lot of noise, write a lot of emails and then vanish. He would also be sure to be seen when something was delivered, often staying back late at those times.

It got so bad that one friend of mine started tracking when he was around, and tracking every time one of his relatives died. During his tenure the following incidents occurred,

– Hot water system blew up. 4 times. He had pictures which he would show all the time.

– Uncles, aunts, and various other family members died, to a total of 20 individuals.

– Our time tracking showed him to be in the office less than 15 hours a week on average.

We started to suspect he had a second job and was pulling the same con on them. This was never proved, but we did find someone who had worked with him previously and they reported the same behavior.

The worst thing was that it was raised with management at least several dozen times and nothing ever happened. He managed to pull this scam off for 4 years. I could not believe the waste of money this guy was; literally $500,000 burnt on a useless individual.