Regular Expressions are Fast, Until they Aren’t

TL;DR: Regular expressions are fast, until they aren’t. How I got a 20x performance improvement by switching to string functions.

With the new version of searchcode one of the main things I wanted to address was performance. The previous version had all sorts of performance issues, and they were not limited to the usual suspects such as the database or search index.

When developing the new version one of the tasks in my queue was to profile search operations for anything slowing things down. Sadly I have since lost the profile output, but I observed that one of the main speed culprits was the format_results function inside the code model. For most queries I tried it was the slowest operation, yet it wasn’t worth optimising simply because its overall impact was so low. I did however keep it in the back of my mind that if any performance issues appeared it would be the first thing to look at.

The final step in any profiling exercise, however, is to load some production data and run a load test. My personal preference is to generate a heap of known URLs and tell a tool like Siege to go crazy with them. The results showed that 95% of pages loaded very quickly, but some took over 30 seconds. These instantly prompted further investigation.

A quick look at the profiler showed that the “back of my mind” function had become a serious issue. It seemed that for some regular expressions over some inputs the worst case time was very bad indeed.

All of a sudden that function had become a major bottleneck. This needed to be resolved: fix the worst case without selling out the best case, which was very fast indeed. To understand what the function does you need to understand how searchcode works. When you search for a function or snippet, searchcode first looks for something that matches exactly what you are looking for, and second for something containing anything in your query. This means you end up with two match types, exact matches and fuzzy matches. The results are then processed by firstly trying to match the query exactly, and then going for a looser match.

This was initially implemented through two regular expressions like the ones below,

import re

exact_match = re.compile(re.escape(search_term), re.IGNORECASE)
loose_match = re.compile('|'.join([re.escape(x) for x in search_term.split(' ')]), re.IGNORECASE)

As you can see they are compiled before being handed off to another function which uses them for the actual matching. These are fairly simple regular expressions, with the first just looking for an exact match and the second a large OR match. Certainly you would not expect them to cause any performance issues. Reading the following Stack Overflow discussion regarding the differences certainly seems to suggest that unless you are doing thousands of matches the performance difference should be negligible.
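To make the later change easier to follow, here is a minimal sketch of what a matching function receiving these compiled patterns might look like. The function name, arguments and structure are my assumptions for illustration, not searchcode’s actual code.

def find_matches(lines, exact_match, loose_match):
    # Hypothetical sketch only: walk the lines of a result and tag each
    # line with the strongest match type found.
    results = []
    for line in lines:
        if exact_match.search(line):
            results.append((line, 'exact'))
        elif loose_match.search(line):
            results.append((line, 'loose'))
    return results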

At heart searchcode is just a big string matching engine, and it does many thousands or hundreds of thousands of match operations for even a simple search. Since I don’t actually use the power of regular expressions, the fix was to change the code so that we pass in an array of terms to search for and use a simple Python in operator check.

exact_match = [search_term.lower()]
loose_match = [s.lower().strip()
                for s in search_term.split(' ')]
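Continuing the hypothetical sketch from above, the matching function then boils down to a couple of in checks over the pre-lowered terms:

def find_matches(lines, exact_match, loose_match):
    # Same illustrative sketch as before, with the regular expressions
    # replaced by plain substring checks against lowered text.
    results = []
    for line in lines:
        lowered = line.lower()
        if any(term in lowered for term in exact_match):
            results.append((line, 'exact'))
        elif any(term in lowered for term in loose_match):
            results.append((line, 'loose'))
    return results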

The results? Remember we wanted to improve the worst case without selling out the best case. The end result was that pages which had been taking nearly a minute to return were coming back in less than a second. All other queries seemed to come back either in the same time or faster.
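If you want a rough feel for the difference, a toy comparison along the following lines shows the gap between an OR regular expression and plain in checks. The sample text is made up and the numbers will vary wildly by input, so treat it as an illustration rather than a benchmark of searchcode itself.

import re
import timeit

# Entirely synthetic sample data; none of the terms appear, forcing a full scan.
text = ("some source code mentioning widgets and gadgets and sprockets " * 200).lower()
terms = ["select", "update", "delete", "format_results"]

loose_re = re.compile('|'.join(re.escape(t) for t in terms), re.IGNORECASE)

regex_time = timeit.timeit(lambda: bool(loose_re.search(text)), number=10000)
in_time = timeit.timeit(lambda: any(t in text for t in terms), number=10000)
print("regex: %.3fs  in operator: %.3fs" % (regex_time, in_time))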

So it turns out regular expressions are fast most of the time. Certainly I had never experienced any performance issues with them until now. However at edge cases like the one in searchcode you can hit a wall, and it’s at that point you need to profile and really re-think your approach.

Rebound Project

As I mentioned in the previous entry I had started work on a new project I called Portfold. Built and released without fanfare, I have quietly killed it before the month is even out. Why? I realise now that it was a rebound project, similar to a rebound relationship. I had been getting a little down on searchcode and wanted to branch out to some new technology. Once that was done the itch was scratched, and now I am back to working on searchcode again.

There have been a few long standing issues with searchcode that I have finally tracked down and fixed. Quickly I am going to outline what each was and what I did about it.

The first issue was that if you filtered down the results using the right hand controls, the filters would be lost the moment you paged through the results. I cannot remember why I didn’t implement this the first time, though I suspect it was due to issues with the code dealing with non-existent filters, which I have since fixed.

Another issue was that when paging through results sometimes searchcode would report a page where none existed, or when you paged through you only got a few results rather than the 20 that should be there. This one took a long time to track down. The root cause turned out to be a slight difference between how code is indexed versus how it is searched. When searchcode indexes it performs various splitting operations over the code to ensure that a search for a term will find results containing that exact term. It also splits it so a search for api duckduck go will also work. This process runs when performing a search as well, however it is not quite the same logic, because one part is done in Sphinx’s indexing pipeline and the other in Python. The result was that I neglected to implement the split on the single . in the search logic. Very annoying to say the least. I had been getting bug reports about it for a while and had built up a very large bug report on it. One of the main issues was being unable to replicate it on my local machine, due to the size of the data, which has outgrown the ability of a single machine to deal with.
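A simplified way to picture the fix is a single splitting routine shared by both sides. The function below is purely illustrative (the real splitting is spread between Sphinx’s indexing pipeline and searchcode’s Python code), but it shows why a missing '.' split in the search path makes the two sides disagree about how many documents match.

def split_terms(text):
    # Illustrative only: emit each raw token plus its dot-separated parts,
    # so something like "api.duckduckgo.com" is findable both as a whole
    # and as the individual words.
    terms = set()
    for token in text.lower().split():
        terms.add(token)
        terms.update(part for part in token.split('.') if part)
    return terms

# If indexing uses split_terms() but the search path forgets the '.' split,
# result counts differ between the two, which is what produced the phantom pages.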

One other annoying issue was that the source code parsers had a nasty habit of crashing the instance they were running on, which required a reboot to resolve. I run these on the smallest possible Digital Ocean instances. It turns out that by default these do not have any swap space, which ended up causing the issue. Thankfully adding swap space is relatively easy and they have now been running for days without issue. I have since queued up another million or so projects to index, which should be searchable soon.

The last issue, and one I am still addressing, is performance. I was a little disturbed by how long it was taking for even cached results to return. A bit of poking around showed that all of the AJAX requests to fetch similar results were flooding the available Gunicorn workers on the backend. This needed to be rectified. The first step was to set up Nginx to read cached results directly from Memcached and avoid hitting the backend at all. The second was to increase the number of workers slightly. The result is that the page loads a lot faster for the average request now, with less load on the server.

In addition to the above fixes I have also added a new piece of functionality. You can now filter by user/repo name. This works in a similar manner to the existing repo filter, however you need to supply the username first, delimited with a forward slash. An example would be a search for select repo:boyter/batf
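As a rough idea of how such a filter can be pulled out of a query, here is a small sketch. The function name and the exact query grammar are my assumptions rather than searchcode’s implementation.

def parse_repo_filter(query):
    # Hypothetical sketch: separate a user/repo filter such as
    # "repo:boyter/batf" from the remaining search terms.
    terms, user, repo = [], None, None
    for token in query.split():
        if token.startswith('repo:') and '/' in token:
            user, repo = token[len('repo:'):].split('/', 1)
        else:
            terms.append(token)
    return ' '.join(terms), user, repo

print(parse_repo_filter('select repo:boyter/batf'))  # ('select', 'boyter', 'batf')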

All in all I am happy with progress so far. The current plan is to focus on upgrading the API so that everything is exposed for those who wish to build on it. This will include public APIs for getting a code result, finding related results, advanced filters, and pretty much everything required to, in theory, create a clone of searchcode. In addition I want to expand the index as much as possible by pulling in Bitbucket and more of GitHub. As always any feedback is greatly appreciated and I do try to implement requests.

Portfold: Topic Research Software

Every few years I have a habit of starting a new project. The goal is always to scratch my own itch and learn some new technology in the process. While I am still working on searchcode, I really wanted to play with what I had learnt there and apply it to something new.

You can view it at

Recently I have been taking an interest in various topics such as “Oil Gas Pipeline Failure Rates” and “Hydroelectric Dam Environmental Impacts” (both reports generated using Portfold). My standard workflow was to enter a search term into my favorite search engine and then click through the results looking for the interesting information. This was extremely time consuming, so I went looking for a better way to do it.

The problem as such is that searching for information presents a collection of related blue links. What do you do then?

The result is a project called Portfold. Portfold is a form of topic research software. The workflow is pretty simple. You enter some search terms and a list of results is displayed. Wait a few moments and the results are pulled down, with all of the information extracted and presented in a print friendly format. Information such as the number of words and whether the result is a PDF is displayed inline. It was built using Django with MySQL for the backend and AngularJS for the frontend, making it a single page application.

I am not aware of anyone else solving this issue in this way. This may or may not be a good thing but I am gaining a lot of value from it.

You can view it at however you will need an account. Email me at if you would like one, however keep in mind I will be looking to charge for it at some point in the future simply because there are direct costs to running the service.

If you would like to see some examples of what sort of reports Portfold can produce please see the following detailed report outputs.

Prescription Drugs Suicide Link
Dichloro Diphenyl Trichloroethane
Backup And Recovery Approaches Using Aws
Australia Privacy Laws
Oil Gas Pipeline Failure Rates
Hydroelectric Dam Environmental Impacts
Democrat Candidates Republican Candidates Equal Pay
Migration And Maritime Powers Legislation Amendment (Resolving The Asylum Legacy Caseload) Bill 2014
Man Haron Monis

Why isn’t searchcode 100% free software

The recent surge in attention to searchcode from the Windows 9/10 naming fiasco resulted in a lot of questions being raised about searchcode’s policy on free software (open source). While the policy is laid out on the about page, some have raised the question of the ethics of using a website which is not 100% free (as in freedom).

For the purposes of the rest of this post “free software” is going to refer to software that does not infringe freedom, rather than free as in beer.

Personally I believe in the power of free software. I have contributed to projects over the years, either submitting bug reports, patches/pull requests, supplying feedback or releasing my own code under a permissive license. With searchcode my policy has been to contribute back where it makes personal sense (I will get to the personal part later). So far this includes opening all of the documentation parsers as part of the DuckDuckGo Fathead project, releasing a simple bug tracker I was using, and promising to donate 10% of profits to the free software projects I find most beneficial.

The personal portion is the important one to take note of here. The main reason why searchcode is not 100% free software is the support burden it would create for me. I am running searchcode 100% on my own time, using my own tools, servers, software and hardware paid for by myself. All of this takes place outside my day job. I really do not have the time to deal with the support overhead that is bound to come from opening up such a project.

How do I know it will create such an overhead? Consider the following personal examples. Search for “decoding captchas” in your choice of search engine. Somewhere near the top you will find an article hosted on this website that I wrote several years ago. The article was written so that anyone trying to decode a CAPTCHA would have a good foundation of ideas and code to work from. To date, this single article has resulted in nearly an email every day from someone asking for assistance. This would not normally be a problem, except that 99% of the emails consist of either questions that the article already answers, or something to the effect of “I want to decode a captcha, plz supply the codes in VB”. Polite responses to such emails, in which I state that I will not do this even if I were being paid and that everything required is already available, have resulted in abuse and threats.

Another example comes from the following collection of posts and their source on GitHub. This small collection of posts also produces a lot of email from people asking questions. To reduce the overhead I ended up writing a follow up post which I can redirect a lot of the questions to. Even with both of these resources I still get a lot of questions about how to just set things up and have it working.

My point here isn’t to complain. I wrote the above knowing I would get requests for help. I usually amend the post in question when asked a few times for details, and generally I enjoy responding to each request. The issue is that searchcode is a lot more complicated than the above projects combined, and the support requests that are bound to come from opening it up, with no obvious benefit to me, outweigh any benefits I am likely to see. I could indeed write documentation for this, but since I do not believe in infrastructure as text documents I prefer to keep it all as code.

You might note that I am being purely selfish about this, and that opening the source is not necessarily for my benefit but for the benefit of others, and you would be right. However you also need to remember that it shouldn’t be detrimental to me either. Keep in mind searchcode makes no money and is a side project which fills a need I had and which I am happy to work on day to day.

That said, if I ever get bored of searchcode and close it down I promise to release 100% of the source as free software. I also will revisit this current policy if searchcode ever produces income beyond covering hosting expenses.

I hope this clears up some of the questions that keep popping up. If you disagree (and I am sure many do) feel free to email me outlining your reasons. I am not above changing my mind when presented with a well reasoned argument.

Interesting Code Comment

Found the following comment in some code I had modified a few years ago.

Just to set this up: it’s an existing application I had no hand in creating, and is a total atrocity of 180,000 lines of untested (and pretty much un-testable) code which, through the abuse of extension methods, lives in a single class spread out across multiple files.

This is evil but necessary. For some reason people have put validation rules here rather then in the bloody ValidationHelper. Thanks to their incompetence or genius... we now have no idea if we add the extra validation in the correct place and call it here if it will work. Since this is also 180,000 lines of non tested nor testable code (without refactoring) I have no confidence in making any changes. Sure we have subversion but that dosnt allow us to code fearlessly ripping apart methods and refactoring since we have no test safety net.

I guess the obligatory car analogy would be driving down the highway, carrying nuclear waste, in an open container, in a snow storm, with acid/lsd/ice fueled drugie ninja bikies attacking you, while on fire, while juggling chainsaws, and all of a sudden you need to change the tyre. So much is going on that its you dont want to risk it and then when forced to do so you know its going to end up badly.

If you are still reading this then for the love of all things holy, help by refacting stuff so we can test it properly. The DAO layer should be fairly simple but everthing else is a shambles. 

Rant time over. Lets commit sin by adding more validation.

Feedback Loop

About a month ago searchcode managed to sit on the front page of Hacker News (HN) for most of a day, which produced a lot of useful feedback for me to act on. You can read the full details here: searchcode: A source code search engine

Between the HN feedback, some received via tweets, and some from republished articles, I ended up with a list of things I needed to work on.

The first and main change requested was to the way searchcode was matching results. By default it looked for exact matches; hence if you searched for something like “mongodb find” it would look for that exact text. Quite a few people requested this be changed, with the expectation that the matching would work like GitHub’s. This has now taken effect. A sample search that came up is included below with the new logic,

I believe the results are more in line with the expectation.

The second thing requested was that I point at the new Google endpoints for GWT and Android. This has been done and the code is currently sitting in the queue ready to be indexed. I expect this to take place in the next few days. In addition I have pulled in a lot of new repositories from GitHub and Bitbucket using their APIs. The number of projects now being indexed is well over 5 million and growing every day.

The last request came from the user chdir on HN. I hope they won’t mind but I have included their request below,

“I use sourcegraph occasionally and mostly rely on Gihub search. I wish the search has all those advanced refinement options that grep & Sublime Text search has. Some examples would be to use regex, search a word within a scope of lines, search within search results etc. Additionally, it’s very useful to be able to sort the search results by stars/forks. Sometimes I just want to see how popular projects have implemented a certain feature. A keyword based search isn’t enough for that.

I guess these features are very expensive & slow to implement but it would be super useful if it can be achieved. Source code search is for geeks so it is probably fair to say that a truly advanced & complex interface won’t turn away users.”

The above is actually one of the more difficult requests, however its suggestions are on my radar of things to do. To start with I have rolled out an experimental feature which displays matching results. One of the issues with code search is that, developers being good developers, there is a lot of duplicate code shared between projects. Since you don’t want to see the same file repeated thousands of times when you search for something like “jquery mobile”, you need to work out the duplicate content and filter it out.

Sometimes however you want to see those results. It’s a piece of functionality that existed in Google Code Search which I had wanted to implement for a long time. Well, it is now here. The duplicates are worked out using a few methods: matching MD5 hashes, matching file names, and a new hash I developed myself which converges the more similar two files are. Similar to simhash, this new hash however does not require any post-calculation operations to determine if two files are a match. More details of this will come in a later post after I iron out all the kinks.
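As a rough illustration of the simplest of those methods, grouping results whose contents hash to the same MD5 might look something like the sketch below. The custom similarity hash mentioned above is deliberately not reproduced here, since its details are for a later post.

import hashlib
from collections import defaultdict

def group_duplicates(files):
    # files is an iterable of (path, contents) pairs; anything sharing an
    # MD5 digest is treated as a duplicate and hidden behind "Show N matches".
    groups = defaultdict(list)
    for path, contents in files:
        digest = hashlib.md5(contents.encode('utf-8')).hexdigest()
        groups[digest].append(path)
    return {digest: paths for digest, paths in groups.items() if len(paths) > 1}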

Anyway, you can now see this functionality. Try searching for “jquery mobile” and look next to the title. You will see something along the lines of “Show 76 matches”.


Clicking the link will expand out the matching files for this result. Each of the matching results shows the filename, the project, and the location within the project. All of course are clickable and link to the duplicate file.


Lastly you can also do the same on the code page itself. Just click “Show 5 matches” on the top right of the result page to see a list of the matching files.


There is more to come in the next few weeks which I am excited about but for the moment I would love to get feedback on the above.

What is special about DDG

Since I am still bringing all my content together I thought I would pull in this post from Quora asking what is special about DuckDuckGo.

1. Privacy enabled by default. This certainly helped get traction when the NSA security revelations came around. DDG is not the only privacy conscious search engine, but it is certainly one that pushes it as a feature more than others. See

2. !bang syntax. Remember back in the early days of Google they had a “Try this search on” and a list of search engines? !bang is that idea on steroids. This makes the cost of switching to DDG much lower than with any other search engine, because you are not locked in when its results are lacking.

3. Gabriel Weinberg (the creator) came up with a way to index the web for a fraction of the cost of anyone else, i.e. use someone else’s index through web APIs such as Bing/Yahoo BOSS. This means DDG can have an index of billions of pages without buying hundreds of machines and then crawling and indexing. Consider Cuil as an example. BTW I wrote more about this here: Building a search engine? The most important feature you can add.

4. Persistence. Quite a few search engines based on Yahoo Boss and other API’s have come and gone, however DDG continues to be worked on. Just being around for 4 years gives it credibility.

5. DuckDuckHack. If you are a developer you can go to DuckDuckHack and add functionality you want. This may not sound that good, but because DDG already has traffic it’s a good incentive for start-ups and others to build on the DDG API to get some traction, which means they want to use DDG and promote it, which fuels growth.

6. People. The people working on DDG are pretty awesome.

7. Uncluttered results. The results are pretty much just some links without too much fancy stuff going on.

Sphinx and searchcode

There is a rather nice blog post on the Sphinx Search blog about how searchcode uses Sphinx. Since I wrote it, I thought I would include a version slightly edited for clarity below. You can read the original here.

I make no secret of the fact that the indexer powering searchcode is Sphinx Search, which for those who do not know is a stand-alone indexing and searching engine similar to Solr.

Since searchcode’s inception in 2010, Sphinx has powered the search functionality and provides the raw searching and faceting functionality across 19 billion lines of source code. Each document has over 6 facets and there are over 40 million documents in the index at any time. Sphinx serves over 500,000 queries a month from this with the average query returning in less than a second.

searchcode is an unusual beast in that while it doesn’t index as many documents as other large installations, it indexes a lot more data. This is due to the average document size being larger and the way source code is delimited. The result of these requirements is that the index when built is approximately 3 to 4 times larger than the data being indexed. The special transformations required are accomplished with a thin wrapper on top of Sphinx which modifies the text processing pipeline. This is applied when Sphinx is indexing and when running queries. The resulting index is over 800 gigabytes in size on disk and when preloaded consumes over 25 gigabytes of RAM.

This is all served by a single i7 quad core server with 32 gigabytes of RAM. The index is distributed and split into 4 parts, allowing all queries to run over network agents and scale out seamlessly. Because of the size of the index and how long indexing takes, each part is only rebuilt every week and a small delta index is used to provide recent updates.

Every query run on searchcode runs multiple times as a method of improving results and avoiding cache rot. The first run uses the Sphinx ranking mode BM25 and subsequent runs use SPH04. BM25 uses a little less CPU than SPH04, hence new queries use it since return time to the user is important. All subsequent runs happen as an offline asynchronous task which does some further processing and updates the cache, so the next time the query is run the results are more accurate. Commonly run queries are added to the asynchronous queue after the indexes have been rotated to provide fresh search results at all times. searchcode is currently very CPU bound and, given the resources, could improve search times 4x with very little effort simply by moving each of the Sphinx indexes to individual machines.
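For a flavour of what the two-pass ranking looks like, here is a hedged sketch using SphinxQL (Sphinx’s MySQL-protocol interface). The host, port, index name and column are placeholders for illustration, not searchcode’s actual configuration.

import pymysql  # SphinxQL speaks the MySQL protocol, usually on port 9306

def run_query(query, ranker):
    # Placeholder connection details and index name.
    conn = pymysql.connect(host='127.0.0.1', port=9306, user='')
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id FROM code_index WHERE MATCH(%s) "
                "LIMIT 20 OPTION ranker=" + ranker,
                (query,))
            return cur.fetchall()
    finally:
        conn.close()

# Cheap BM25 pass returned to the user; slower SPH04 pass refreshes the cache.
fast_results = run_query('jquery mobile', 'bm25')
accurate_results = run_query('jquery mobile', 'sph04')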

searchcode updates to the latest stable version of Sphinx for every release. This has happened for every version from 0.9.8 all the way to 2.1.8 which is currently being used. There has never been a single issue with each upgrade and each upgrade has overcome an issue that was previously encountered. This stability is one of the main reasons for having chosen Sphinx initially.

The only issues encountered with Sphinx to date were some limits on the number of facets, which have been resolved in the latest versions. Any other issue has been down to configuration, and was quickly resolved.

In short Sphinx is an awesome project. It has seamless backwards compatibility, scales up to massive loads and still returns results quickly and accurately. Having since worked with Solr and Xapian, I would still choose Sphinx as searchcode’s indexing solution. I consider Sphinx the Nginx of the indexing world. It may not have every feature possible, but it’s extremely fast and capable, and the features it does have work for 99% of use cases.

Estimating Sphinx Search RAM Requirements

If you run Sphinx Search you may want to estimate the amount of RAM it requires in order to pre-cache the index. This can be done by looking at the size of the .spa and .spi files on disk. On any Linux system you can run the following command against the directory where your Sphinx indexes are located.

ls -la /SPHINXINDEX/|egrep "spa|spi"|awk '{ SUM += $5 } END { print SUM/1024/1024/1024 }'

This will print out the number of gigabytes required to hold the Sphinx index in RAM and is useful for judging when you need to either upgrade the machine or scale out. It tends to be accurate to within 200 megabytes or so in my experience.
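If you prefer doing it from Python, a rough equivalent of the one-liner (reusing the placeholder index path from above) would be:

import os

def sphinx_ram_estimate(index_dir):
    # Sum the sizes of the .spa and .spi files, roughly matching the
    # egrep "spa|spi" filter in the shell command, and return gigabytes.
    total = sum(
        os.path.getsize(os.path.join(index_dir, name))
        for name in os.listdir(index_dir)
        if name.endswith(('.spa', '.spi')))
    return total / 1024.0 ** 3

print("%.2f GB needed to preload" % sphinx_ram_estimate('/SPHINXINDEX/'))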