Portfold: Topic Research Software

Every few years I have a habit of starting a new project, the goal always being to scratch my own itch and learn some new technology in the process. While I am still working on searchcode.com I really wanted to take what I had learnt there and apply it to something new.

You can view it at portfold.com

Recently I have been taking an interest in various topics such as “Oil Gas Pipeline Failure Rates” and “Hydroelectric Dam Environmental Impacts” (both generated using Portfold). My standard workflow was to enter a search term into my favorite search engine and then click through the results looking for the interesting information. This was extremely time consuming, so I went looking for a better way to do it.

The problem as such is that searching for information presents a collection of related blue links. What do you do then?

The result is a project called Portfold. Portfold is a form of Topic Research Software. The workflow is pretty simple. You enter some search terms and a list of results is displayed. Wait a few moments and the results are pulled down, with all of the information extracted and presented in a print-friendly format. Information such as the number of words and whether the result is a PDF is displayed inline. It was built using Django with MySQL for the back-end and AngularJS for the front-end, making it a single page application.
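The per-result metadata described above (word counts, PDF detection) could be extracted along these lines. This is only a minimal sketch of the idea; the function name and logic are my illustration, not Portfold's actual code.

```python
def classify_result(url, content_type, body_text):
    """Return basic inline metadata for one fetched search result.

    url          -- the result's address
    content_type -- the Content-Type header returned by the server
    body_text    -- the decoded response body (empty for binary files)
    """
    # A result counts as a PDF if the server says so or the URL ends in .pdf.
    is_pdf = "application/pdf" in content_type or url.lower().endswith(".pdf")
    # Rough word count for text results; a real implementation would
    # strip HTML tags and boilerplate before counting.
    word_count = 0 if is_pdf else len(body_text.split())
    return {"url": url, "is_pdf": is_pdf, "word_count": word_count}
```

The fetch itself (and the print-friendly extraction) would sit in front of this, but the inline display only needs the small dictionary returned here.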

I am not aware of anyone else solving this issue in this way. This may or may not be a good thing but I am gaining a lot of value from it.

You can view it at portfold.com however you will need an account. Email me at bboyte01@gmail.com if you would like one, however keep in mind I will be looking to charge for it at some point in the future simply because there are direct costs to running the service.

If you would like to see some examples of what sort of reports Portfold can produce please see the following detailed report outputs.

Prescription Drugs Suicide Link
Dichloro Diphenyl Trichloroethane
Backup And Recovery Approaches Using Aws
Australia Privacy Laws
Oil Gas Pipeline Failure Rates
Hydroelectric Dam Environmental Impacts
Democrat Candidates Republican Candidates Equal Pay
Migration And Maritime Powers Legislation Amendment (Resolving The Asylum Legacy Caseload) Bill 2014
Man Haron Monis

Why searchcode.com isn’t 100% free software

The recent surge in attention to searchcode from the Windows 9/10 naming fiasco resulted in a lot of questions being raised about searchcode’s policy on free software (open source). While the policy is laid out on the about page, some have raised the issue of the ethics of using such a website which is not 100% free (as in freedom).

For the purposes of the rest of this post, “free software” is going to refer to software defined as not infringing freedom, rather than free as in beer.

Personally I believe in the power of free software. I have contributed to projects over the years, whether by submitting bug reports, patches/pull requests, supplying feedback or releasing my own code under a permissive license. With searchcode my policy has been to contribute back where it makes personal sense (I will get to the personal part later). So far this includes contributing all of the documentation parsers to the DuckDuckGo Fathead project, releasing a simple bug tracker I was using, and promising to donate 10% of profits to the free software projects which I find most beneficial.

The personal portion is the important one to take note of here. The main reason why searchcode is not 100% free software is the support burden it would create for myself. I am running searchcode 100% on my own time, using my own tools, servers, software and hardware paid for by myself. All of this takes place outside my day job. I really do not have the time to deal with the support overhead that is bound to come from opening such a project.

How do I know it will create such an overhead? Consider the following personal examples. Search for “decoding captchas” in your choice of search engine. Somewhere near the top you will find an article hosted on this website which I wrote several years ago. The article was written so that anyone trying to decode a CAPTCHA would have a good foundation of ideas and code to work from. To date, this single article has resulted in nearly one email every day from someone asking for assistance. This would not normally be a problem, except that 99% of the emails consist of either questions that the article already answered, or something to the effect of “I want to decode a captcha, plz supply the codes in VB”. Polite responses to such emails, where I state I will not do this even if I were being paid and that everything required is already available, have resulted in abuse and threats.

Another example comes from the following collection of posts and the source on GitHub. This small collection of posts also produces a lot of email from people asking questions. To reduce the overhead I ended up writing a follow-up post which I can redirect a lot of the questions to. Even with both these resources I still get a lot of questions from people about how they can just set things up and have it working.

My point here isn’t to complain. I wrote the above knowing I would get requests for help. I usually amend the post in question when asked a few times for details. Generally I enjoy responding to each request. The issue is that searchcode is a lot more complicated than the above projects combined, and the support requests that are bound to come from opening it, with no obvious benefit to me, outweigh any benefits I am likely to see. I could indeed write documentation for this, but since I do not believe in infrastructure as text documents I prefer to keep it all as code.

You might note that I am being purely selfish about this, and that opening it is not necessarily for my benefit but for the benefit of others, and you would be right. However, you also need to remember that it shouldn’t be detrimental to me either. Keep in mind searchcode makes no money and is a side project which fills a need I had and which I am happy working on day to day.

That said, if I ever get bored of searchcode and close it down I promise to release 100% of the source as free software. I also will revisit this current policy if searchcode ever produces income beyond covering hosting expenses.

I hope this clears up some of the questions that keep popping up. If you disagree (and I am sure many do) feel free to email me outlining your reasons. I am not above changing my mind if presented with a well reasoned argument.

Interesting Code Comment

I found the following comment in some code I had modified a few years ago.

Just to set this up: it is an existing application I had no hand in creating, and it is a total atrocity of 180,000 lines of untested (and pretty much untestable) code which, through the abuse of extension methods, lives in a single class spread out across multiple files. The comment is reproduced verbatim below, typos and all.

This is evil but necessary. For some reason people have put validation rules here rather then in the bloody ValidationHelper. Thanks to their incompetence or genius... we now have no idea if we add the extra validation in the correct place and call it here if it will work. Since this is also 180,000 lines of non tested nor testable code (without refactoring) I have no confidence in making any changes. Sure we have subversion but that dosnt allow us to code fearlessly ripping apart methods and refactoring since we have no test safety net.

I guess the obligatory car analogy would be driving down the highway, carrying nuclear waste, in an open container, in a snow storm, with acid/lsd/ice fueled drugie ninja bikies attacking you, while on fire, while juggling chainsaws, and all of a sudden you need to change the tyre. So much is going on that its you dont want to risk it and then when forced to do so you know its going to end up badly.

If you are still reading this then for the love of all things holy, help by refacting stuff so we can test it properly. The DAO layer should be fairly simple but everthing else is a shambles. 

Rant time over. Lets commit sin by adding more validation.

Feedback Loop

About a month ago searchcode.com managed to sit on the front page of Hacker News (HN) for most of a day, which produced a lot of useful feedback for me to act on. You can read the full details here: searchcode: A source code search engine.

Between the HN feedback, some I received via tweets and some from republished articles, I got a list of things I needed to work on.

The first and main change requested was over the way searchcode was matching results. By default it was looking for exact matches. Hence if you searched for something like “mongodb find” it would look for that exact text. Quite a few people asked for this to change; the expectation was that the matching would work like GitHub’s. This has now taken effect. A sample search that came up is included below with the new logic,


I believe the results are more in line with the expectation.

The second thing requested was that I point at the new Google endpoints for GWT and Android. This has been done and the code is currently sitting in the queue ready to be indexed; I expect this to take place in the next few days. In addition I have pulled in a lot of new repositories from GitHub and Bitbucket using their APIs. The number of projects now being indexed is well over 5 million and growing every day.

The last request came from the user chdir on HN. I hope they won’t mind but I have included their request below,

“I use sourcegraph occasionally and mostly rely on Gihub search. I wish the search has all those advanced refinement options that grep & Sublime Text search has. Some examples would be to use regex, search a word within a scope of lines, search within search results etc. Additionally, it’s very useful to be able to sort the search results by stars/forks. Sometimes I just want to see how popular projects have implemented a certain feature. A keyword based search isn’t enough for that.

I guess these features are very expensive & slow to implement but it would be super useful if it can be achieved. Source code search is for geeks so it is probably fair to say that a truly advanced & complex interface won’t turn away users.”

The above is actually one of the more difficult requests, but its suggestions are on my radar of things to do. To start with I have rolled out an experimental feature which displays matching results. One of the issues with code search is that, us being good developers, there is a lot of duplicate code shared between projects. Since when you search for something like “jquery mobile” you don’t want to see the same file repeated thousands of times, you need to work out the duplicate content and filter it out.

Sometimes, however, you want to see those results. It is a piece of functionality that existed in Google Code Search which I had wanted to implement for a long time. Well, it is now here. The duplicates are worked out using a few methods: matching MD5 hashes, file names, and a new hash I developed myself which converges the more similar the files are. Similar to simhash, this new hash however does not require any post-calculation operations to determine if two files are a match. More details of this will come in a later post after I iron out all the kinks.
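The MD5 part of the duplicate detection is the simplest to illustrate: files with identical contents hash to the same digest, so they can be grouped and collapsed behind a single representative. A minimal sketch of that grouping follows; the function name and data shapes are my own illustration, not searchcode's actual code, and the similarity hash mentioned above is deliberately not reproduced here.

```python
import hashlib
from collections import defaultdict

def group_duplicates(files):
    """Group files with identical contents by MD5 digest.

    files -- iterable of (path, contents) pairs
    Returns a dict mapping hex digest -> list of paths sharing it.
    """
    groups = defaultdict(list)
    for path, contents in files:
        digest = hashlib.md5(contents.encode("utf-8")).hexdigest()
        groups[digest].append(path)
    return groups
```

Any group with more than one path is a duplicate set, which is exactly what a "Show N matches" link can expand.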

Anyway, you can now see this functionality. Try searching for “jquery mobile” and look next to the title; you will see something along the lines of “Show 76 matches”.


Clicking the link will expand the matching files for this result. Each matching result shows the filename, project and location within the project. All of course are clickable and link to the duplicate file.


Lastly you can also do the same on the code page itself. Just click “Show 5 matches” on the top right of the result page to see a list of the matching files.


There is more to come in the next few weeks which I am excited about but for the moment I would love to get feedback on the above.

What is special about DDG

Since I am still bringing all my content together I thought I would pull in this post from Quora asking what is special about DuckDuckGo.

1. Privacy enabled by default. This certainly helped get traction when the NSA security revelations came around. DDG is not the only privacy conscious search engine, but it is certainly one that pushes it as a feature more than others. See https://duckduckgo.com/privacy

2. !bang syntax. Remember back in the early days of Google when they had a “Try this search on” link and a list of search engines? !bang is that idea on steroids. This makes the cost of switching to DDG much lower than for any other search engine, because you are not locked in when its results are lacking.

3. Gabriel Weinberg (creator) came up with a way to index the web for a fraction of the cost of anyone else, i.e. use someone else’s index through web APIs such as Bing/Yahoo BOSS. This means DDG can have an index of billions of pages without buying hundreds of machines and then crawling and indexing. Consider Cuil as an example. BTW I wrote more about this here: Building a search engine? The most important feature you can add.

4. Persistence. Quite a few search engines based on Yahoo BOSS and other APIs have come and gone, however DDG continues to be worked on. Just being around for 4 years gives it credibility.

5. DuckDuckHack. If you are a developer you can go to DuckDuckHack and add functionality you want. This may not sound that good, but because DDG already has traffic it is a good incentive for start-ups and others to build on the DDG API to get some traction, which means they want to use DDG and promote it, which fuels growth.

6. People. The people working on DDG are pretty awesome.

7. Uncluttered results. The results are pretty much just some links without too much fancy stuff going on.

Sphinx and searchcode

There is a rather nice blog post on the Sphinx Search blog about how searchcode uses Sphinx. Since I wrote it, I thought I would include a slightly edited (for clarity) version below. You can read the original here.

I make no secret of the fact that the indexer powering searchcode is Sphinx Search, which for those who do not know is a standalone indexing and search engine similar to Solr.

Since searchcode’s inception in 2010, Sphinx has powered the search functionality and provides the raw searching and faceting functionality across 19 billion lines of source code. Each document has over 6 facets and there are over 40 million documents in the index at any time. Sphinx serves over 500,000 queries a month from this with the average query returning in less than a second.

searchcode is an unusual beast in that while it doesn’t index as many documents as other large installations, it indexes a lot more data. This is due to the average document size being larger and the way source code is delimited. The result of these requirements is that the index, when built, is approximately 3 to 4 times larger than the data being indexed. The special transformations required are accomplished with a thin wrapper on top of Sphinx which modifies the text processing pipeline; this is applied both when Sphinx is indexing and when running queries. The resulting index is over 800 gigabytes in size on disk and, when preloaded, consumes over 25 gigabytes of RAM.
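The delimiting problem above is worth a concrete illustration: prose splits cleanly on whitespace, but source code packs many searchable terms into one token. searchcode's actual transformation is not described in detail, so the following is only a sketch of the general idea, with a function name of my own invention.

```python
import re

def code_terms(line):
    """Split a line of source code into searchable terms.

    Treats anything that is not a letter, digit or underscore as a
    delimiter, so "db.users.find({})" yields db, users, find --
    the terms a developer would actually type into a code search.
    """
    return [term for term in re.split(r"[^A-Za-z0-9_]+", line) if term]
```

Emitting several terms per source token like this is one reason a code index ends up several times larger than the raw data it covers.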

This is all served by a single i7 quad core server with 32 gigabytes of RAM. The index is distributed and split into 4 parts, allowing all queries to run over network agents and scale out seamlessly. Because of the size of the index and how long this takes, each part is only indexed once a week, with a small delta index used to provide recent updates.
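A distributed Sphinx index of this shape is declared in sphinx.conf roughly as follows. This is a hypothetical fragment: the index names, hosts and ports are placeholders, not searchcode's actual configuration.

```ini
# Hypothetical sphinx.conf fragment: one distributed index fanning out
# to four index parts via network agents, plus a local delta index
# for recent updates. Hosts, ports and names are placeholders.
index code_dist
{
    type  = distributed
    agent = 127.0.0.1:9312:code_part1
    agent = 127.0.0.1:9313:code_part2
    agent = 127.0.0.1:9314:code_part3
    agent = 127.0.0.1:9315:code_part4
    local = code_delta
}
```

Queries against `code_dist` fan out to all agents in parallel, which is what makes moving each part to its own machine a near-linear speed-up.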

Every query run on searchcode is run multiple times as a method of improving results and avoiding cache rot. The first query uses the Sphinx ranking mode BM25 and subsequent queries use SPH04. BM25 uses a little less CPU than SPH04, hence new queries use it, as return time to the user is important. All subsequent queries run as an offline asynchronous task which does some further processing and updates the cache, so the next time the query is run the results are more accurate. Commonly run queries are added to the asynchronous queue after the indexes have been rotated, to provide fresh search results at all times. searchcode is currently very CPU bound and, given the resources, could improve search times 4x with very little effort simply by moving each of the Sphinx indexes to individual machines.
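The two-pass flow described above can be sketched as follows: answer the user from a cheap BM25 pass, then queue the slower SPH04 pass to refresh the cache offline. The `search_fn`, cache and queue here are hypothetical stand-ins (searchcode actually uses Celery and Memcached for this), so treat it as a shape, not the real implementation.

```python
import queue

refine_queue = queue.Queue()  # stand-in for the Celery task queue
cache = {}                    # stand-in for Memcached

def run_query(term, search_fn):
    """Serve a search quickly, scheduling a more accurate re-run."""
    if term in cache:
        return cache[term]
    # First pass: BM25 is cheaper on CPU, so the user waits less.
    results = search_fn(term, ranker="BM25")
    cache[term] = results
    # Second pass happens offline with the better (slower) ranker.
    refine_queue.put(term)
    return results

def refine_worker(search_fn):
    """Drain the queue, re-ranking each query with SPH04."""
    while not refine_queue.empty():
        term = refine_queue.get()
        cache[term] = search_fn(term, ranker="SPH04")
```

The next request for the same term then hits the cache and gets the SPH04-ranked results for free.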

searchcode updates to the latest stable version of Sphinx for every release. This has happened for every version from 0.9.8 all the way to 2.1.8 which is currently being used. There has never been a single issue with each upgrade and each upgrade has overcome an issue that was previously encountered. This stability is one of the main reasons for having chosen Sphinx initially.

The only issues encountered with Sphinx to date were some limits on the number of facets, which have been resolved in the latest versions. Any other issues were due to configuration problems, which were quickly resolved.

In short, Sphinx is an awesome project. It has seamless backwards compatibility, scales up to massive loads and still returns results quickly and accurately. Having since worked with Solr and Xapian, I would still choose Sphinx as searchcode’s indexing solution. I consider Sphinx the Nginx of the indexing world: it may not have every feature possible, but it is extremely fast and capable, and the features it does have work for 99% of solutions.

Estimating Sphinx Search RAM Requirements

If you run Sphinx Search you may want to estimate the amount of RAM it requires in order to pre-cache the index. This can be done by looking at the size of the .spa and .spi files on disk. On any Linux system you can run the following command in the directory where your Sphinx indexes are located.

ls -la /SPHINXINDEX/|egrep "spa|spi"|awk '{ SUM += $5 } END { print SUM/1024/1024/1024 }'

This will print out the number of gigabytes required to store the Sphinx index in RAM, and is useful for judging when you need to either upgrade the machine or scale out. In my experience it tends to be accurate to within 200 megabytes or so.
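The same estimate can be computed without awk, for example from a monitoring script. Here is a small Python equivalent of the one-liner above; the directory path is whatever your sphinx.conf points at, and the function name is my own.

```python
import os

def sphinx_ram_gb(index_dir):
    """Estimate Sphinx pre-cache RAM: combined .spa/.spi size in GB.

    Mirrors the shell one-liner: sum the sizes of attribute (.spa)
    and dictionary (.spi) files, the parts Sphinx keeps in memory.
    """
    total_bytes = sum(
        os.path.getsize(os.path.join(index_dir, name))
        for name in os.listdir(index_dir)
        if name.endswith((".spa", ".spi"))
    )
    return total_bytes / 1024 ** 3
```

Like the shell version, this only estimates the preloaded portion; the document lists (.spd) stay on disk and are not counted.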

searchcode next

There seems to be a general trend with calling the new release of your search engine next (see Iconfinder and DuckDuckGo), and so I am happy to announce and write about searchcode next.

As with many projects, searchcode has some very humble beginnings. It originally started out as an “I need to do something” side project, just indexing programming documentation. Time passed and the idea eventually evolved into a search engine for all programming documentation, and then, with Google Code Search being shut down, a code search engine as well.

searchcode was running on a basic LAMP stack: Ubuntu Linux as the server, with PHP, MySQL and Apache. APC was installed to speed up PHP, with some memcached calls to take heat off the database. The CodeIgniter PHP framework was used for the front end, with a lot of back-end processes written in Python.

Never one to agree with the advice that you should never rewrite your code, I did exactly that. searchcode is now a Django application. The reasons for this are varied, but essentially it was running on an older server (Ubuntu 10.04) and a now defunct web framework, CodeIgniter. I figured since I had to rewrite portions anyway I might as well switch over to a language that I prefer and want to gain more experience in.

As mentioned, searchcode is now a Django application, still backed by MySQL. Sphinx provides the search index, with a healthy mix of RabbitMQ and Celery for back-end tasks. Deployments and server configuration are automated through Fabric, and Memcached is included for speed. Some of the original back-end processes still exist as cron jobs but are slowly being moved over to Celery tasks. It still runs on Ubuntu Server since that is the Linux distribution I am most comfortable with.

Of particular note, searchcode runs on two servers, which could probably be reduced to a single one at its current size, but this allows for growth. Both are dedicated boxes provided by Hetzner: 4 core i7 machines with 3 terabytes of disk space each. The only difference between them is that the first has 16 gigabytes of RAM and the second, which hosts the index, has 32 gigabytes. The first runs the web server (nginx talking through Gunicorn to Django), the database and Memcached. The second exclusively runs the Sphinx index (more details about Sphinx to come).

Load averages before the move were rather chaotic. I had seen spikes up to 100, which for a 4 core box is pretty horrible. The new version, even under extreme pressure (from a Siege test and GoogleBot), maxes out at about 2, with the search spiking to 4 for brief periods if a lot of un-cached searches hit all of a sudden. The other advantage is that searches come back much faster with the new setup; average page response times have dropped considerably.

The application is heavily unit tested and runs through a battery of tests before each deployment, including unit, integration and smoke tests, which do a reasonable job of catching issues before they are deployed. The other benefit is that the code-base is testable, which is generally a good thing.

There is more to come and I am excited about the future of searchcode.

Chatbot Nonsense

There has been a lot of coverage recently about a chat-bot, “Eugene Goostman”, passing a variant of the Turing test by convincing 30% of the testers that it was a human, by posing as a 13 year old Ukrainian boy (to make misspellings and grammar mistakes forgivable, I suppose).

Naturally I had to give it a try, and frankly I can’t see how something like this could convince anyone that it is a living human being. I asked a few questions such as “Whereabouts in Ukraine do you live?”, the response being “It is a country in SE Europe that’s all I can tell you”, which is not exactly promising.

For fun I scripted it up against a copy of the Eliza chat-bot and had them talk among themselves. A quick excerpt is included below; it is rather amusing.

Eliza: Is it important to you that your attitude to I ?
Eugene: It is highly possible. It is highly possible.
Eliza: Lets discuss further why your feelings right now.
Eugene: Just because 2 plus 2 is 5! Wonna ask me something more?
Eliza: Don't any other reasons come to mind ?
Eugene: Is it a kind of manace? Maybe, let's talk about something else? What would you like to discuss?
Eliza: Why the uncertain tone ?
Eugene: Why? Just because! :) Why? Just because! :)
Eliza: Why do you ask ?
Eugene: Oh, but why not? Well, let's go on though.
Eliza: Are such questions much on your mind ?
Eugene: Who knows... Well, let's go on though.
Eliza: Have you asked such questions before ?
Eugene: I would rather not talk about it if you don't mind. Oh, what a fruitful conversation;-)
Eliza: What would you do if I don't mind ?

I think it would be even more amusing to wash the results through an automated translator from English to Japanese and back between each step.
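The bridge that produced the transcript above can be sketched very simply: each bot's reply becomes the other's input. The bot objects here are hypothetical stand-ins with a `respond()` method; the real script had to drive Eugene's web form and an Eliza implementation behind that interface.

```python
def converse(bot_a, bot_b, opening, turns=6):
    """Alternate messages between two bots, returning the transcript.

    bot_a, bot_b -- objects with .name and .respond(message) -> reply
    opening      -- the first message, fed to bot_a
    """
    transcript = []
    message = opening
    bots = [bot_a, bot_b]
    for i in range(turns):
        speaker = bots[i % 2]
        message = speaker.respond(message)  # reply feeds the next bot
        transcript.append((speaker.name, message))
    return transcript
```

Inserting a round-trip machine translation step inside the loop, as suggested above, would just be one more transformation applied to `message` between turns.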