searchcode.com: The Architecture – migration 3.0

On the 27th of July 2016 at about 9:30pm my local time (GMT+10) I updated the A records for searchcode's nameservers to point at a new stack that has been several months in the making. As with most posts of this nature, a quick recap of where things were and where they are now is in order.

The previous searchcode stack consisted of two dedicated servers hosted by Hetzner, which I discussed about two years ago when writing about searchcode next. The first server was a reasonably powerful machine running pretty much all of the code required to deliver searchcode.com itself, with the exception of the actual index. searchcode is a Django application, and nginx was configured to serve results directly out of memcached where possible to avoid consuming a running Gunicorn process. The move to a second server for the index came pretty quickly after searchcode was released, as it was just not performant enough with everything on a single box, which was the situation for a short time.
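As a rough sketch of that cache-first idea (my illustration, not the actual searchcode code; the key scheme and timeout are assumptions), the Django side only needs to drop each rendered page into memcached under a key that nginx can derive from the request URL:


from django.core.cache import cache

CACHE_TIMEOUT = 60 * 60  # assumption: cache rendered pages for an hour

def cache_page_for_nginx(request, response):
    # nginx (via its memcached module) looks this key up first, so a
    # warm cache entry never consumes a Gunicorn worker
    if response.status_code == 200:
        cache.set('page:%s' % request.get_full_path(), response.content, CACHE_TIMEOUT)
    return response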

searchcode before

This structure worked well for the last two years or so, but I had noticed that the load average on the frontend was starting to average out at about 3.0+ and would quite often rise to 7.0+. Considering the machine had only 4 real CPU cores this was an issue. Interestingly it also caused the number of requests that searchcode could respond to to max out. In addition, the MySQL database had picked up some corruption somewhere along the way which made the app increasingly unstable; at least once a month it would go down in a way that nothing short of a reboot would fix. Finally, I had grown as a developer over the last two years and it was time to move to something more stable and better performing.

The first step in this modern cloud world was to start looking at moving to something like AWS or DigitalOcean. I first created a simple spreadsheet with what I was looking to move towards along with expected costs. In short I wanted to break searchcode apart so that it consisted of the following parts,

Frontend server. Would provide SSL termination to Varnish and reverse proxy back to the application servers. Requires either a lot of RAM or a fast disk; if RAM is low but the disk is fast, Varnish can be used with a disk cache.

Backend server. Would run the application itself. Scales horizontally. Requires a fast CPU to process the code results.

Indexer server. Would host the Sphinx indexer. Scales horizontally. Requires a fast CPU, a reasonable amount of RAM and about 25 GB of disk space per CPU core, preferably SSD.

Database server. Would host the MySQL database. Does not scale horizontally. Requires lots of RAM (more than 4 GB) and 2 TB of disk space.

With these requirements in mind, I started shopping around.

AWS was ruled out almost immediately due to the cost, in spite of the fact that I have a great deal of experience with the AWS APIs. As a rule I run searchcode.com as lean as possible, and AWS, while brilliant, ended up costing more than I would have liked.

I then started looking at DigitalOcean. I have always been a fan of how fast they spin instances up, their simple API and their prices. They also have excellent support and are pretty lenient when it comes to how much CPU and disk IO you consume. When I started looking, however, they had not launched Block Storage, which meant I needed the $640 a month plan for the database, which was even more expensive than AWS. They did release block storage later, but the resulting price was still rather high. I still use DigitalOcean for spinning up test stacks.

I also looked at Vultr. It's pretty safe to say they are a DigitalOcean clone, but with more data centers, competitive prices and much worse customer support. They do have one intriguing server option not mentioned on their public website, however: a storage server. It is backed by a regular spinning-rust style disk, and as such is far cheaper than anything offered by any other company. A database server with 1 TB of storage, 4 GB RAM and 2 CPUs (the largest storage instance they have) is only $40 a month.

As a believer in infrastructure as code I was very keen to move to one of the cloud providers, and picked Vultr based on the storage instance and cost. I started importing the searchcode database into the storage instance. However, about 3/4 of the way through I received an email from Vultr support saying that I was exceeding the disk IO of the storage instance with a sustained 125 IOPS. Fair enough, and I would have been willing to throttle it back if I could gain some assurance about the terms of service. Sadly Vultr lived up to their reputation of having poor support and I am yet to hear back. I terminated the instance and started looking again. They do offer dedicated instances and the prices are actually not too bad, but they did not have enough disk space for my needs.

I realized that the only real option for something like searchcode (which is fully bootstrapped) is as much power as possible at the lowest price. I also don't mind getting my hands dirty and can live without the support. Lastly I need to be able to abuse the machines as I see fit, and the only way to do that is to go dedicated.

Thankfully Hetzner has a very nice server auction house. You can browse through and pick up used servers at a considerable discount over a new order, and you don't have to pay the setup fee. To avoid the database corruption issue I experienced previously, I went shopping for a server with ECC RAM for the database and quickly found a tidy machine with 32 GB of RAM and enough disk space. For the frontend I just picked up the cheapest 32 GB machine I could find, which interestingly was similar to my previous frontend machine but with 32 GB of RAM. Lastly I went looking for something to serve as the backend and the indexer. I quickly realized that I could combine both machines into one and save a few dollars. With this saving I was able to overlook the initial setup fee and went for two machines with 32 GB of RAM and 500 GB SSDs. Since Hetzner cannot spin servers up and down like a cloud provider, I hard-coded the details into my fab file for building the stack, but otherwise everything deploys as though I was using a cloud provider. Only it costs considerably less and should be much, much faster.
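For those curious what hard-coding the hosts looks like, it is nothing more exotic than the sketch below (the IP addresses and role names are placeholders, not the real machines):


from fabric.api import env

# Hosts pinned to the dedicated Hetzner machines; placeholder IPs only.
env.roledefs = {
    'frontend': ['203.0.113.10'],
    'backend':  ['203.0.113.11', '203.0.113.12'],  # backend and indexer combined
    'database': ['203.0.113.13'],
}

Everything else in the fab file targets roles, so moving back to a cloud provider later would only mean replacing this dictionary with an API call.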

searchcode current

The results?

Well, as mentioned, the previous instances were sitting at a load average of 3.0+ most of the time. The new backend/indexer boxes are sitting at a load average of 0.1+, which is a massive improvement. The database and frontend are similarly lightly loaded. The DNS at this point is still flipping over for some users, so it is not serving all results yet, but I cannot imagine the load rising beyond 0.3+ for everything.

With the hardware decisions discussed, let's dive into the software, starting with the data storage. MySQL has always been my database of choice simply because I am more familiar with it than any other database. I had previously toyed with migrating to PostgreSQL but decided against it, simply because there are other things I should focus my time on that actually provide real value. As such I have stuck with MySQL for the latest searchcode version. One change I did make was to upgrade to 5.7 so I can leverage the native JSON data type; otherwise the only change was to modify the MySQL config to take advantage of the power of the box as per best practices.
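To give a feel for what the native JSON type buys you, here is a generic illustration (the table and column names are invented for this example, not searchcode's actual schema):


import MySQLdb  # provided by the mysqlclient package

conn = MySQLdb.connect(user='searchcode', passwd='secret', db='searchcode')
cur = conn.cursor()

# JSON columns are validated on insert and can be queried server side
cur.execute('''CREATE TABLE IF NOT EXISTS repo_meta (
                   id INT PRIMARY KEY AUTO_INCREMENT,
                   meta JSON NOT NULL)''')
cur.execute('INSERT INTO repo_meta (meta) VALUES (%s)',
            ('{"language": "Python", "stars": 42}',))
conn.commit()
cur.execute("SELECT JSON_EXTRACT(meta, '$.language') FROM repo_meta")
print(cur.fetchall())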

The backend machines have Nginx set to listen for incoming connections from the frontend. They pass requests back to Gunicorn/Django, which performs the appropriate action. Where possible the result is served directly by Django out of Memcached; if not, a request to Sphinx may be made for search results followed by a lookup to the database, or a direct request to the database. Gunicorn is configured with four worker processes per box. Sphinx is configured using a distributed index with 4 agents running on each box and one box acting as the master. Memcached is currently configured with 4 GB of RAM on each backend instance, but is not pooled together.
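The Gunicorn side of this is only a few lines of configuration; something like the sketch below (the bind address is an assumption for illustration):


# gunicorn.conf.py
bind = '127.0.0.1:8000'  # nginx on the same box proxies back to this
workers = 4              # four worker processes per box, as described above
worker_class = 'sync'

This would be launched with something along the lines of gunicorn -c gunicorn.conf.py searchcode.wsgi (the module name here is assumed).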

The frontend machine is configured using Varnish with 24 GB of RAM allocated to cache. The remaining 8 GB is deliberately left over for use by OS caching and Nginx. Nginx is the frontend to the whole system, providing SSL termination and proxying back to Varnish. I did briefly consider using HAProxy for this role, but since I was already familiar with Nginx, and at the scale searchcode currently operates, Nginx was the better choice. If in time there is much greater load, moving to HAProxy is something that will be considered.

What's next? Well, the future of searchcode.com at this point is to leverage the new machines to increase the size of the index and refresh the code more quickly. I have a few plans on how to do so and will release details in the next update. If you have read this far you are a beast! Thanks for the support and feel free to email me with any further questions.

How to Hide Methods From Fabric Task Listing

Occasionally you may want to hide a method from appearing inside the fabric listing of available tasks. Usually it's some sort of helper method you have created that is shared by multiple tasks. So how do you hide it? Simply prefix it with an underscore _

For example,


from fabric.api import sudo

def _apt_get(packages):
    '''Makes installing packages easier'''
    sudo('apt-get update')
    sudo('apt-get -y --force-yes install %s' % packages)

When listing the fabric tasks this method will no longer appear in the results.
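The underscore-prefixed helper can still be called from any public task as usual; only the listing hides it. A trivial example (the package choice here is arbitrary):


def setup_webserver():
    '''Installs nginx using the hidden helper above'''
    _apt_get('nginx')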

Python Fabric How to Show or List All Available Tasks

Showing or displaying the available tasks inside a fabric fabfile is one of those things that almost everyone wants to do at some point. Most people work out that you can just request a task you know does not exist (usually discovered through a typo) and fabric will print the list. However, there is also a proper way to list them built into fabric itself.

The below are all methods which can be used to display the currently defined tasks.


fab -l
fab --list
fab taskthatdoesnotexist

Run any of the above in a directory where a fabfile is located and you will be presented with a list of all the available tasks.

Set Ubuntu Linux Swapfile Using Python Fabric

Annoyingly, most cloud providers have a habit of not adding any swap memory to the instances you spin up. Probably because if they added swap the disk size would appear smaller than it is, or if they had a dedicated swap partition they would have to bear the cost or, again, use some of your disk space.

Thankfully adding swap to your Ubuntu Linux instance is fairly easy. The following task, when run, will check if a swapfile already exists on the host, and if not create one, mount it and set it to be remounted when the instance is rebooted. It takes a parameter which specifies the size of the swap in gigabytes.


from fabric.api import sudo
from fabric.contrib.files import exists

def setup_swapfile(size=1):
    if not exists('/swapfile'):
        sudo('fallocate -l %sG /swapfile' % size)
        sudo('chmod 600 /swapfile')
        sudo('mkswap /swapfile')
        sudo('swapon /swapfile')
        sudo('echo "/swapfile   none    swap    sw    0   0" >> /etc/fstab')

BTW I am writing a book about how to automate your life using Python Fabric. Click the link and register your interest for when it is released.

Python Fabric Set Host List at Runtime

With the advent of cloud computing, where you spin up and tear down servers at will, it becomes extremely useful to pick the hosts you want fabric to run on at runtime, rather than through the usual env.hosts setting. This allows you to query your servers through your cloud provider's API without having to maintain a list. It can be a more powerful and flexible technique than using roles, and in a devops world can save you a lot of time.

The trick is to know that when fabric runs, the outgoing SSH connection is only made when the first put/get/sudo/run command is issued. This means you can change env.hosts before that point and target whatever machines you want.

For the example below we are going to run a command on all of our servers after getting a full IP list from our fictitious cloud provider through an API call using the excellent Python requests module.

First, let's define a task which will set env.hosts to all of the servers in our cloud.


import json
import requests
from fabric.api import env

def all():
    req = requests.get('https://api.mycloud.com/v1/server/list', headers={'API-Key': 'MYAPIKEY'})
    serverlist = json.loads(req.text)

    if len(serverlist) == 0:
        env.hosts = []
        return

    env.hosts = [server['public_ip'] for server in serverlist]

The above makes an HTTPS call using our API key and loads the JSON response into an object. Then, if any servers exist, we loop through pulling out the public IP address of each and assign the list to our environment hosts.

Now we can call it together with a simple task that runs uname to get output from all of the servers inside our cloud.
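The hostname task itself was not shown in the original post, but it would be something along the lines of the following (my guess at its definition):


from fabric.api import run

def hostname():
    # uname -n prints the network node hostname of the box
    run('uname -n')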


$ fab all hostname
[box1] run: uname -n
[box1] out: box1
[box2] run: uname -n
[box2] out: box2

Done.
Disconnecting from box1... done.
Disconnecting from box2... done.

You could create individual tasks for each group of servers you control using this technique, but that's not very DRY (don't repeat yourself) or neat. Let's modify our all task to accept a parameter so we can filter down our servers at runtime.


def all(server_filter=None):
    req = requests.get('https://api.mycloud.com/v1/server/list', headers={'API-Key': 'MYAPIKEY'})
    serverlist = json.loads(req.text)

    if len(serverlist) == 0:
        env.hosts = []
        return

    if server_filter:
        env.hosts = [server['public_ip'] for server in serverlist if server['tag'] == server_filter]
    else:
        env.hosts = [server['public_ip'] for server in serverlist]

We changed our method to accept a parameter, server_filter, which defaults to None and which we use to filter down our servers based on a tag. If your cloud provider's API is sufficiently powerful you can even change the request itself to handle this use case and save yourself the effort of filtering after the response comes back.
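Assuming the fictitious API accepted a tag query parameter, pushing the filtering server side might look like this (the endpoint and parameter name are made up, as before):


def all(server_filter=None):
    # let the (fictitious) API do the filtering via a query parameter
    params = {'tag': server_filter} if server_filter else {}
    req = requests.get('https://api.mycloud.com/v1/server/list',
                       headers={'API-Key': 'MYAPIKEY'}, params=params)
    env.hosts = [server['public_ip'] for server in json.loads(req.text)]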

To call our new method you pass the filter as in the following example.


$ fab all:linux hostname
[box1] run: uname -n
[box1] out: box1
[box2] run: uname -n
[box2] out: box2

Done.
Disconnecting from box1... done.
Disconnecting from box2... done.

BTW I am writing a book about how to automate your life using Python Fabric. Click the link and register your interest for when it is released.

searchcode server released

searchcode server, the downloadable self-hosted version of searchcode.com, is now available. A large amount of work went into the release, with a variety of improvements based on feedback from the general beta releases.

searchcode server

searchcode server has a number of advantages over searchcode.com that will eventually be back-ported in. The full list of things to check out is included below,

  • New Single Page Application UI for a smooth search experience
  • Ability to split on terms, so a search for “url signer” will match “UrlSigner” (see the sketch after this list)
  • Massively improved performance: 3x in the worst case and 20x in the best
  • Configurable through the UI and a configuration file
  • Spelling suggestion that learns from your code
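As a rough idea of how that term splitting can work, here is a sketch in Python (my own illustration; the actual searchcode server implementation is Java/Lucene):


import re

def split_terms(name):
    # split identifiers such as UrlSigner or url_signer into their parts
    spaced = re.sub(r'([a-z0-9])([A-Z])', r'\1 \2', name)
    return spaced.replace('_', ' ').lower().split()

print(split_terms('UrlSigner'))   # ['url', 'signer']
print(split_terms('url_signer'))  # ['url', 'signer']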

A few things of note,

  • Java 8 application built using Lucene and the Spark Framework
  • Designed to work on any server. The test bench server is a netbook using an Intel Atom CPU, and searches return in under a second
  • Scales to gigabytes of code and thousands of repositories
  • Works on Linux, OSX and Windows

Be sure to check it out!


searchcode server

A month or so ago I started collecting emails on searchcode.com to determine if there was enough interest in a downloadable version of searchcode. The results were overwhelmingly positive. The email list grew far beyond what I would have expected, and this was just in the first month. As such I have been working on this downloadable version of searchcode, which will probably be called searchcode server.

Progress has been reasonably straightforward considering that searchcode.com is written mostly in Python and searchcode server is mostly Java. The main reason for choosing Java is that I really wanted searchcode server to be a self-contained application which could be downloaded and run without the configuration and setup of additional services.

At present it is surprisingly workable. You can input repositories (git only at this point) to be indexed and after a short amount of time they will be searchable via the main interface. A few screenshots are included at the end of this post for those curious.

There is still time to sign up and be one of the first to receive access. Being on the sign-up list will also give you a discount when it is actually released, should you need something greater than the community edition. To register your interest, use the form below or visit the searchcode server product page.

This has been released and you can now get the actual product.

[Screenshots of searchcode server, taken 29th December 2015]

Pi-Hole for Ubuntu 14.04

Because I personally work for an ad-supported company, and because searchcode.com is currently supported via third-party advertising, I tend to keep an eye on the state of ad blockers on the web.

Most people probably know about adblockplus and other browser extensions, but there are other ways to block ads on one's network. One that I had previously read about was setting up your own Bind9 DNS server and adding custom rules to block ads at the DNS level. Over the last week I had been playing around with this, but since I am not a bind expert I was unable to get it working in a satisfactory way.

However, the following article about blocking all ads using a Raspberry Pi appeared on my radar. I don't have a Raspberry Pi, but I did have an old netbook (an Asus Eee 1000HA) lying around that I was trying to find some use for. I had previously set it up with Ubuntu 14.04 and had it running under the house as an OwnCloud test box. I thought it might be a good candidate for this sort of thing.

The install was pretty easy, as simple as following the guide at http://pi-hole.net/. It says that you need to be using Raspbian, but it worked perfectly for me on Ubuntu. Thankfully I have a reasonably good router (a D7000, which I can highly recommend), and once the setup was done I pointed its DNS at the new server and sat back waiting for things to start working. It did. Flawlessly.

I think the advertising industry is in for a rude shock. When these devices are this cheap and this simple to install, it is only a matter of time before they become built in to the router itself or a plug-and-play product.

searchcode local

I am going to copy the searchcode pitch itself below quickly before explaining it a bit further.

“searchcode offers powerful code search over billions of lines of open source code. Imagine what it could do with your private repositories.

There have been requests to offer a downloadable version of searchcode. Given enough interest a downloadable hostable version of searchcode will be offered. Register your email below to register your interest.

Note that there would be a free Community version available for all users as well as paid version offering support. Functionality would remain the same across all versions. This would be similar to how Octopus Deploy is offered.”

In short I am considering writing a hostable version of searchcode. Most likely it would consist of a Java application one could download and use to get similar results to searchcode.com itself (probably at a smaller scale, however).

Rather than actually committing to several months' worth of work, however, I have put a message on searchcode asking those interested to register their interest. If it sounds like something you would like, please register.

I have no signup target numbers in mind or product costs etc… but I suspect that given over 100 sign-ups I will actually go forth and implement it.

I should note that this is something I have been highly resistant towards for a long time as I do not really want to get into enterprise sales cycles.

Anyway, in a month's time, if there are enough signups I will push forward and release an initial version to those who have signed up. Anyone who does will get free access on the beta list and discounts on the final version (should they need something more powerful than the community edition).

This has been released and you can now get the actual product.