Sphinx Real Time Indexes: How to Distribute and a Hidden Gotcha

I have been working on real time indexes with Sphinx recently for the next version of searchcode.com and ran into a few things that were either difficult to search for or just not covered anywhere.

The first is how to implement a distributed search using real time indexes. It’s actually done the same way you would normally create a distributed index. Say you had a single server with 4 index shards on it and you wanted to run queries across all of them. You could use the following,

index rt
{
    type = distributed
    local = rt1
    agent = localhost:9312:rt2
    agent = localhost:9312:rt3
    agent = localhost:9312:rt4
}

You would need to have each one of your indexes defined (only one is added here to keep the example short),

index rt1
{
    type = rt
    path = /usr/local/sphinx/data/rt1
    rt_field = title
    rt_field = content
    rt_attr_uint = gid
}

Using the above you would be able to search across all of the shards. The trick is knowing that, to update, you need to update each shard yourself. You cannot pass documents to the distributed index, but instead must make a separate update to each shard. Usually I split Sphinx shards based on a query like the following,

SELECT cast((select id from table order by 1 desc limit 1)/4 as UNSIGNED)*2, \
         cast((select id from table order by 1 desc limit 1)/4 as UNSIGNED)*3 \
         FROM table limit 1

Where the 4 is the number of shards and the multiplier splits the shards out. It’s performant due to index use. However for RT indexes I suggest a simple modulus operation (%) against the ID column for each shard, as it allows you to continue to scale out to each shard equally.
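To make the “update each shard yourself” point concrete, below is a minimal sketch of how documents could be routed to the shards defined above. It assumes searchd exposes a SphinxQL listener on port 9306 and uses pymysql as the client; the insert_document helper and the sample values are purely illustrative.

import pymysql

SHARDS = 4  # matches the rt1..rt4 indexes defined above

# SphinxQL speaks the MySQL protocol, so a standard MySQL client library works
conn = pymysql.connect(host='127.0.0.1', port=9306)
cursor = conn.cursor()

def insert_document(doc_id, title, content, gid):
    # route the document to a shard using a modulus over its id
    shard = 'rt%d' % ((doc_id % SHARDS) + 1)
    cursor.execute(
        'INSERT INTO %s (id, title, content, gid) VALUES (%%s, %%s, %%s, %%s)' % shard,
        (doc_id, title, content, gid))

insert_document(42, 'some title', 'some content', 1)

# searches still go through the distributed rt index as normal
cursor.execute("SELECT id FROM rt WHERE MATCH('some content')")
print(cursor.fetchall())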

The second issue I ran into was that when defining the attributes and fields you must define all of the fields before the uint attributes. The above examples work fine, but the below is incorrect. I couldn’t find this mentioned anywhere in the documentation.

index rt
{
    type = rt
    path = /usr/local/sphinx/data/rt
    rt_attr_uint = gid # this should be below the rt_fields
    rt_field = title
    rt_field = content
}

Explaining VarnishHist – What Does it Tell Us

The varnishhist tool is one of the most underused Varnish tools that come with your standard Varnish install, probably because of how it appears at first glance.

In short, you want as many | symbols as possible and you want everything far toward the left hand side. The closer to the left, the faster the responses are, regardless of whether they are cached or not. The more | symbols, the more items were served from cache.

A small guide,

'|' is a cache HIT
'#' is a cache MISS
'n:m' in the top left corner is the vertical scale
'n = 2000' is the number of requests being displayed (from 1 to 2000)

The X-axis is the logarithmic time between the request coming from the kernel to Varnish and the response going from Varnish back to the kernel.

The times on the X-axis are as such,

1e1 = 10 sec
1e0 = 1 sec
1e-1 = 0.1 secs or 100 ms (milliseconds)
1e-2 = 0.01 secs or 10 ms
1e-3 = 0.001 secs or 1 ms or 1000 µs (microseconds)
1e-4 = 0.0001 secs or 0.1 ms or 100 µs
1e-5 = 0.00001 secs or 0.01 ms or 10 µs
1e-6 = 0.000001 secs or 0.001 ms or 1 µs or 1000 ns (nanoseconds)

Below is the varnishhist for searchcode.com, showing that while most responses are served in about 100 ms, not many are cached. This can mean one of a few things.

  • The responses are not cacheable and you need to adjust the back-end responses to have the correct headers (or override the settings with VCL config).
  • The cache timeout for the back-end responses isn’t high enough to ensure that later requests are served from cache.
  • There isn’t a large enough cache to hold all the responses (that’s the problem in this case).
1:20, n = 2000




                                  ##
                                  ##
                                  ##
                                  ##
                                  ##
                                  ##
                                  ##
                                 ###
                                 ####
            |                    ####
            |                    ####
            |                    ####
            |                    #####
            |                    #####
            |                    #####
            |                   #######
            |                   #######
            ||  |    #      #   ##########
+------+------+------+------+------+------+------+------+------
|1e-6  |1e-5  |1e-4  |1e-3  |1e-2  |1e-1  |1e0   |1e1   |1e2

MySQL Dump Without Impacting Queries

Posted more for my personal use (I have to look it up every time), but here is how to run a mysqldump without impacting performance on the box. It sets the ionice and nice values as low as possible (while still running), uses a single transaction, and ups the max packet size for MySQL.

ionice -c2 -n7 nice -n19 mysqldump -u root -p DATABASE --single-transaction --max_allowed_packet=512M > FILENAME

Exporting Documents from KnowledgeTree 3.7.0.2

I was recently tasked with exporting a large collection of documents from KnowledgeTree (KT) for a client. The collection was too large to use the download all functionality and too wide to attempt to export each folder individually.

I had played around with the WebDav connection that KT provides but it either didn’t work or was designed deliberately to not allow exporting of the documents.

I looked at where the documents were stored on disk, but KT stores them as numbered files in numbered directories, sans extension or folder information.

Long story short, I spent some time poking through the database to identify the tables containing the metadata that would allow me to rebuild the tree on a proper filesystem. For the record, the tables required are the following,

  • folders – Contains the folder tree. Each entry represents a folder and contains its parent folder id.
  • documents – Contains the documents that each folder holds. Knowing a folder’s id you can determine which documents live in that folder.
  • document_content_version – Contains the metadata required to get the actual file from disk. A 1 to 1 mapping between document id and this table is all that is required.

That said, here is a short Python script which can be used to rebuild the folders and documents on disk. All that is required is to ensure that the Python MySQLdb module is installed and to set the database details. Depending on your KT install you may need to change the document location. Wherever the script is run it will replicate the folder tree containing the documents, preserving the structure, names and extensions.

Keep in mind this is a fairly ugly script abusing global variables and such. It is also not incredibly efficient, but it did manage to extract 20GB of files in my case in a little under 10 minutes.

import MySQLdb
import os
import shutil

# KnowledgeTree default place to store documents
ktdocument = '/var/www/ktdms/Documents/'

conn = MySQLdb.connect(user='', passwd='',db='', charset="utf8", use_unicode=True)
cursor = conn.cursor()

# global variables FTW
cursor.execute('''select id, parent_id, name from folders;''')
allfolders = cursor.fetchall()

cursor.execute('''select id, folder_id from documents;''')
alldocuments = cursor.fetchall()

cursor.execute('''select document_id, filename, storage_path from document_content_version;''')
document_locations = cursor.fetchall()


# create folder tree which matches whatever the database suggests exists
def create_folder_tree(parent_id, path):
    directories = [x for x in allfolders if x[1] == parent_id]
    for directory in directories:
        d = '.%s/%s/' % (path, directory[2])
        
        print d
        os.makedirs(d)
        # get all the files that belong in this directory
        for document in [x for x in alldocuments if x[1] == directory[0]]:
            try:
                location = [x for x in document_locations if document[0] == x[0]][0]
                print 'copy %s%s %s%s' % (ktdocument, location[2], d, location[1])
                shutil.copy2('%s%s' % (ktdocument, location[2]), '%s%s' % (d, location[1]))
            except Exception:
                print 'ERROR exporting - Usually due to a linked document.'

        create_folder_tree(parent_id=directory[0], path='%s/%s' % (path, directory[2]))

create_folder_tree(parent_id=1, path='')

C# XML Cleaner Regex

One of the most annoying things I deal with is XML documents with invalid characters inside them. Usually caused by copy-pasting from MS Word, they end up with invisible characters that you cannot easily find and that cause XML parsers to choke. I have encountered this problem enough times that I thought a quick blog post would be worth the effort.

So here, mostly for my own reference, is a regular expression for C# .NET that will clean invalid XML characters from any XML string.

const string InvalidXmlChars = @"[^\x09\x0A\x0D\x20-\uD7FF\uE000-\uFFFD]";

Simply take the XML document as a string and do a regular expression replace over it to remove the invalid characters. You can then load it into whatever XML document structure you want and process it as normal.
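The pattern above operates on UTF-16 code units, so characters outside the Basic Multilingual Plane (which the XML spec also allows) appear as surrogate pairs in .NET strings. Purely for comparison, a rough Python 3 sketch of the same idea, where the supplementary range can be written out directly, might look like this:

import re

# characters permitted by the XML 1.0 spec; everything else gets stripped
INVALID_XML_CHARS = re.compile(
    '[^\x09\x0A\x0D\x20-\uD7FF\uE000-\uFFFD\U00010000-\U0010FFFF]')

def clean_xml(text):
    return INVALID_XML_CHARS.sub('', text)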

For the record, the reason I encountered this was while building a series of WSDL web services over HR data. Since the data had multiple sources merged during an ETL process, and some of them were literally CSV files, I hit this issue a lot. Attempts to sanitize the data at the source failed because it was overwritten every few hours and changing the source generation involved multiple teams. In the end I just ran the cleaner over every field before it was returned and everything worked perfectly.

Regular Expressions are Fast, Until they Aren’t

TL/DR: Regular expressions are fast, until they aren’t. How I got a 20x performance improvement by switching to string functions.

With the new version of searchcode.com one of the main things I wanted to address was performance. The previous version had all sorts of performance issues which were not limited to the usual suspects such as the database or search index.

When developing the new version one of the tasks listed in my queue was to profile search operations for anything slowing things down. Sadly I have since lost the profile output, but I observed that one of the main speed culprits was the format_results function inside the code model. For most queries I tried it was the slowest operation, but it wasn’t worth optimising simply because its overall impact was so low. I did however keep it in the back of my mind that if there were any performance issues it would be the first thing to look at.

The final step in any profiling, however, is to load some production data and run a load test. My personal preference is to generate a heap of known URLs and tell a tool like Siege to go crazy with them. The results showed that 95% of pages loaded very quickly, but some took over 30 seconds. These instantly prompted further investigation.

A quick look at the profiler showed that the “back of mind function” was now becoming a serious issue. It seemed that for some regular expressions over some data types the worst case time was very bad indeed.

All of a sudden that function had become a major bottleneck. This needed to be resolved: fix the worst case without selling out the best case, which was very fast indeed. To get an understanding of what the function does you need to understand how searchcode works. When you search for a function or snippet, searchcode tries to find something that matches exactly what you are looking for first, and something containing anything in your query second. This means you end up with two match types, exact matches and fuzzy matches. The results are then processed by firstly trying to match the query exactly, and then going for a looser match.

This was implemented initially through two regular expressions like the below,

exact_match = re.compile(re.escape(search_term), re.IGNORECASE)
loose_match = re.compile('|'.join([re.escape(x) for x in search_term.split(' ')]), re.IGNORECASE)

As you can see they are compiled before being handed off to another function which uses them for the actual matching. These are fairly simple regular expressions, with the first just looking for an exact match and the second a large OR match. Certainly you would not expect them to cause any performance issues. Reading the following on Stack Overflow regarding the differences https://stackoverflow.com/questions/4901523/whats-a-faster-operation-re-match-search-or-str-find certainly seems to suggest that unless you are doing thousands of matches the performance difference should be negligible.

At heart searchcode is just a big string matching engine, and it does many thousands or hundreds of thousands of match operations for even a simple search. Since I don’t actually use any of the power of regular expressions, the fix is to change the code so that we pass in an array of terms to search for and use a simple Python in operator check.

exact_match = [search_term.lower()]
loose_match = [s.lower().strip()
                for s in search_term.split(' ')]
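To make that concrete, here is a hypothetical, stripped-down version of the matching step (the real format_results function does more); the core check is simply substring membership rather than a regular expression:

def find_matching_lines(lines, exact_match, loose_match):
    # plain substring checks avoid the regex worst case behaviour entirely
    exact, loose = [], []
    for line in lines:
        lowered = line.lower()
        if any(term in lowered for term in exact_match):
            exact.append(line)
        elif any(term in lowered for term in loose_match):
            loose.append(line)
    return exact, loose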

The results? Well, remember we wanted to improve the worst case without selling out the best case. Pages that were taking nearly a minute to return were now coming back in less than a second, and all other queries seemed to come back either in the same time or faster.

So it turns out regular expressions are fast most of the time. Certainly I have never experienced any performance issues with them up till now. However at edge cases, like the one in searchcode, you can hit a wall, and it’s at that point you need to profile and really rethink your approach.

Implementing C# Linq Distinct on Custom Object List

Ever wanted to implement Distinct over a custom object list in C#? You quickly discover that it fails to work as expected. Sadly there is a lack of decent documentation about this and a lot of FUD. Since I lost a bit of time over it, hopefully this blog post can be picked up as the answer.

Thankfully it’s not as difficult as you would imagine. Assuming you have a simple custom object which contains an Id, and you want to use that Id to get a distinct list, all you need to do is add the following to the object.

public override bool Equals(object obj)
{
	return this.Id == ((CustomObject)obj).Id;
}

public override int GetHashCode()
{
	return this.Id.GetHashCode();
}

You need both due to the way that LINQ works. I suspect that under the hood it’s using a hash to work out what’s the same, hence GetHashCode.

Bitcoin Clones Use the Same Network?

Another comment I posted over on the TechZing podcast. It was addressing Justin’s comment about bitcoin clones using the same “network”, which is true in that they share the same protocol, but each has its own blockchain.

Each of the “bitcoin” clones is actually its own network. As far as I am aware they have no communication between each other in any form. It’s also why each one’s blockchain is so different in size. The difference between bitcoin and litecoin (and its clones, such as dogecoin) is the proof of work algorithm they use to verify transactions. Bitcoin uses SHA256 (hence you are seeing lots of ASIC devices) whereas litecoin uses Scrypt, which is more ASIC resistant (although ASICs are starting to come out for it as well).

Most of the coins fall into those two groups, either SHA256 or Scrypt. Two coins that I know of that are slightly different are Primecoin and Vertcoin. Primecoin calculates primes as its proof of work, so its output is vaguely useful to anyone studying prime numbers. It’s also the only coin that I am aware of that can only be mined by CPU. This makes it popular to run on botnets and spot instances in the cloud, as you don’t need a GPU. Vertcoin by contrast uses Scrypt, but a modified version which is supposed to be very resistant to ASIC mining, presumably by using even more memory than Scrypt.

I think both of you would be wise to actually have a look at dogecoin. The community has gotten more traction in 2 months than litecoin has, and it is catching up to bitcoin at a staggering rate. Once you get past the meme (which makes it easier to get into, I guess?) there is a lot to like and it’s certainly gaining a lot of adoption. Lastly, it’s about to have its first block reward halving soon, so now is probably a good chance to pick some up before the price doubles again.

It sounds crazy, but the price is going nuts right now. It’s the 3rd highest marketcap coin now and the reward is going to drop in 3 days, so expect it to go up again.

http://tuxedage.wordpress.com/2014/02/06/a-serious-analysis-of-dogecoin-or-why-I-am-all–in-on-dogecoin/

I highly suggest reading the above. I don’t agree with all of it, but mostly it seems right to me. Dogecoin has the potential to be the new litecoin and possibly the new bitcoin, especially with all of the activity taking place.

Be sure to have a look at http://reddit.com/r/dogecoin/ as well. The community is VERY active, enthusiastic and generous. They are spending the coins, making doge more of a currency and less of a value store.

Python pep8 git commit check

Without adding a git commit hook I wanted to be able to check if my Python code conformed to pep8 standards before committing anything. Since I found the command reasonably useful I thought I would post it here.

git status -s -u | grep '\.py$' | awk '{split($0,a," "); print a[2]}' | xargs pep8

Just run the above in your project’s directory. It’s fairly simple but quite effective at ensuring your Python code becomes cleaner every time you commit to the repository. The nice thing about it is that it only checks files you have modified, allowing you to slowly clean up existing code bases.

Why you should never ask permission to clean up code

This is something that took me 2 years or so to learn. One day I realised nobody was really looking at my timecards in depth, so I started allocating extra time to things and using that extra time to fix the things I thought needed fixing. Once I started delivering on this I showed my manager, who agreed that it was a good use of time. I was given free rein to fix anything I felt would add maximum value, provided the bug fixes continued to be delivered without any major compromise.

Since that time I have refactored quite a few code bases: added unit tests, fixed some build processes, improved performance, and generally feel happier at work for getting things done that are important to me.

Don’t get stuck in constant bug fix mode. If you can’t get approval to fix things then change jobs, because bug fix after bug fix is depressing and will bring you down.