Fixing 7digital’s Music Search, Part one: The Gargantuan Index

I’m Michael Okarimia, and since January 2014 I’ve been the Lead Developer of the Content Discovery Team at 7digital. The Content Discovery team recently improved the catalogue search infrastructure for the 7digital API. This is the first of a series of posts describing how we made 7digital’s music search usable for our users and clients.

This is a three part series of posts which describes the work we’ve done on 7digital’s search platform.

This first post describes the issues 7digital had with its search platform; the second post describes how we improved response times; and the third and final post covers how we improved the relevancy of the search results.

The Problem with Music Search: The State of Play in January 2014

Searching for a music track via 7digital’s API is often the first point of contact with its catalogue of 27 million tracks. The API that 7digital provide has many publicly accessible web services, each reached via a specific URL, which we refer to as API endpoints. To stream or purchase a track, one first needs to know its 7digital track ID. 7digital track IDs mean nothing to the average end user, so searching for a track based on its track title and/or artist name is a very common way of retrieving a 7digital track ID. The client consuming the API can then use this track ID to access the music streaming service via the streaming API endpoints.

The title, artist, price and release date of a track are referred to as track metadata: data pertaining to the track, but not the audio file of the track itself.

The track search endpoint, as its name suggests, accepts search terms as text and returns the metadata of any matching tracks in its search results.

This is a track/search response from the 7digital API: http://api.7digital.com/1.2/track/search?q=Happy%20pharrell&oauth_consumer_key=YOUR_KEY_HERE&country=GB&pagesize=2

It was powered by web servers which sent search requests to a SOLR web application. SOLR provides HTTP access to a Lucene search index; Lucene is a very widely used piece of text search software. A typical example of where Lucene is used is searching for words that exist in a collection of books in a library. Many people will be familiar with the search engines on Twitter or Wikipedia, which use Lucene to search for specific pieces of text in a massive corpus of text.

The two biggest problems were:

  1. In January 2014 the average ~/track/search response time was 4000 milliseconds
  2. Search results were often wrong or out of date, or the endpoint would return errors.

In addition to being slow, search results would often contain releases that were no longer available or had since been updated. This manifested itself as a poor user experience for users of the 7digital.com site, who would be greeted by this message:

Release Not Found 7digital error

This used to be a frequent experience for users, caused by inconsistencies between the search data store and the catalogue data store.


This was due to the platform having a separate search database and a catalogue database, which could hold different data for the same content. The search database was updated in a different manner from the catalogue database, and the two were not always consistent with one another. More on that later.

Why was search so slow?

The metadata of the tracks was stored in a search index that was 660Gb in size, which is extremely large compared with most search indexes. Various tweaks were made to the JVM and memory settings, but any improvement they brought did not last for long.

660Gb is very large for a Lucene search index. Why was it so big?

The index was 660Gb in size and contained 660,000,000 documents. Each document represented all the metadata for a single track.

It’s slow, so why not improve the hardware it was running on?

A copy of the SOLR index was replicated onto four physical servers, each with 24 cores, 128Gb of RAM and 1.9TB of storage on SSDs. The hardware the SOLR servers ran on had already been upgraded, initially from smaller virtual machines to larger ones, and later onto physical hardware. Given the response times we were still seeing, this was not cost effective!

If 7digital’s catalogue contains 27 million tracks, and one document represents a single track, then why were there 660 million documents in the search index?

According to the 7digital API documentation, here’s what a track/search response from the 7digital API looks like: http://api.7digital.com/1.2/track/search?q=Happy%20pharrell&oauth_consumer_key=YOUR_KEY_HERE&country=GB&pagesize=2

In the response there are 29 fields representing a track. Every one of these fields was stored in the Lucene search index. However, a search only matches search terms against fields such as track title or artist name, not against fields like bit rate or image URL.
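
In Lucene/SOLR terms this is the difference between a field being indexed (searchable) and merely stored (returned in results). A schema.xml sketch of that distinction might look like the following; the field names and types are illustrative rather than 7digital’s actual schema:

```xml
<!-- Fields that search terms are matched against: indexed and stored -->
<field name="title"      type="text_general" indexed="true"  stored="true"/>
<field name="artistName" type="text_general" indexed="true"  stored="true"/>

<!-- Fields only returned in results, never searched: stored but not indexed -->
<field name="imageUrl"   type="string"       indexed="false" stored="true"/>
<field name="bitRate"    type="int"          indexed="false" stored="true"/>
```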

The structure of the track index was such that the data was denormalised. A track could be available in any of the 44 territories 7digital operate in, and there was one document per track, multiplied by the number of territories the track was available in. To make a single track searchable in all territories, up to 44 documents would be added to the index. Almost all the field values in the documents for a given track would be identical, apart from territory-specific fields such as the price, currency and release date.
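
As an illustration, the documents for one track in two territories might have looked something like the following SOLR add documents; the field names and values are hypothetical:

```xml
<add>
  <doc>
    <field name="id">12345678_GB</field>
    <field name="trackId">12345678</field>
    <field name="title">Happy</field>
    <field name="artistName">Pharrell Williams</field>
    <field name="territory">GB</field>
    <field name="price">0.99</field>
    <field name="currency">GBP</field>
    <field name="releaseDate">2013-11-21</field>
  </doc>
  <doc>
    <!-- identical track metadata, duplicated for a second territory -->
    <field name="id">12345678_US</field>
    <field name="trackId">12345678</field>
    <field name="title">Happy</field>
    <field name="artistName">Pharrell Williams</field>
    <field name="territory">US</field>
    <field name="price">1.29</field>
    <field name="currency">USD</field>
    <field name="releaseDate">2013-11-19</field>
  </doc>
</add>
```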

The reason for the huge index size was that it was being used as a document store as well as a search index. It worked accurately as a document store, but poorly as a search index, for reasons of performance.

Unlike some other NoSQL use cases, there were almost always more tracks being added on an hourly basis and very few being removed; every day sees more music released worldwide.

How were these 27 million tracks turned into a searchable index?

Updates to the index were incremental: minute-by-minute changes to the catalogue were sent via HTTP to the master SOLR server, which turned the data into a single index. This master server did not serve any search queries. That work was done by four other SOLR servers, known as slaves, which periodically copied the index from the master.

Replication from the master SOLR server to the slaves would take around 30-60 minutes, depending on the volume of changes.
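
This master/slave arrangement is SOLR’s built-in index replication. As a sketch, the relevant solrconfig.xml sections would look something like the following; the hostnames, core name and polling interval are assumptions rather than 7digital’s actual settings:

```xml
<!-- On the master: publish a new index version after each commit -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- On each slave: poll the master and pull down any new index version -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://solr-master:8983/solr/tracks</str>
    <str name="pollInterval">00:30:00</str>
  </lst>
</requestHandler>
```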

Due to the incremental updates to the Lucene index, the number of track documents increased on an hourly basis. Within the Lucene index the maxDocument count exceeded the number of searchable documents by a large margin. In Lucene, the maxDocument count is simply the number of documents in the index, including deleted ones, which will never be returned in a search result. 30% of the 660Gb index consisted of deleted, non-searchable documents, which used disc space and hurt search performance.
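
The gap between the two counts is easy to see on any SOLR core via the Luke request handler; the host and core name below are hypothetical:

```sh
# numDocs = searchable documents, maxDoc = all documents including deleted ones
curl 'http://solr-slave01:8983/solr/tracks/admin/luke?numTerms=0&wt=json'
```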

On a previous occasion in 2013, running a Lucene optimize command on this index took over 24 hours and prevented any new tracks from becoming searchable in the meantime. Doing this regularly was not possible when the organisation needed updates on a minute-by-minute basis.
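
The optimize in question is the standard SOLR operation that rewrites the index into fewer segments, physically removing deleted documents. It can be triggered with a single request (hostname and core name hypothetical), but on a 660Gb index it is hugely expensive:

```sh
# Expunges deleted documents by rewriting the whole index on disk
curl 'http://solr-master:8983/solr/tracks/update?optimize=true'
```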

The deleted documents remained in the search index, so over time the index kept growing; in six months it increased by 160Gb.

So what was the problem with updating the index incrementally, adding new tracks as they were imported into 7digital’s platform?

The pain points of a 660Gb index, updated incrementally, were:

  • Data consistency! The delay between a change being made to a track in the source SQL database and that change propagating to the SOLR slave servers was at least two hours. There were many times when the change never made it into the SOLR index at all, resulting in a poor user experience;
  • It was slow to update larger volumes of changes. However, if there were fewer than a few hundred new tracks to add then updates were fairly speedy (within a few minutes);
  • The index could not fit into the memory of its servers, as it exceeded the 128Gb of RAM available, which meant it had to be read from the hard disc, which is an order of magnitude slower;
  • Since an unpredictable number of new tracks were being added to the master index, a new version of the index was replicated to the slave servers every hour. Each time this happened it invalidated the SOLR caches;
  • Each cache invalidation triggered a Java JVM garbage collection, which was resource intensive and chewed up memory and CPU, resulting in slow response times and SOLR server connection timeouts. Many searches would simply fail and return a 503 HTTP error response;
  • Occasionally the index contained erroneous data which had to be manually corrected using a curl command, which was risky (a sketch of such a command follows this list);
  • Serving traffic from such a large index required a lot of hardware;
  • It was expensive to host this index in a virtualised environment; for example, just one of the slave servers was similar in specification to an i2.8xlarge AWS instance.
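
Those manual fixes amounted to posting update or delete commands straight at the master SOLR server, something like the following (hostname, core name and document ID are hypothetical). A typo here would silently remove or corrupt live data, hence the risk:

```sh
# Remove a stale or erroneous track document by its unique ID, then commit
curl 'http://solr-master:8983/solr/tracks/update?commit=true' \
  -H 'Content-Type: text/xml' \
  --data-binary '<delete><id>12345678_GB</id></delete>'
```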

Why not create the entire index all at once, rather than sending constant updates? Wouldn’t that reduce the index size and help performance?

Yes it would. This was attempted in 2013, but it took around five days to complete, during which time no new tracks could be added to the search engine. The alternative was to acquire another large server with a terabyte of storage space and build the index there, but this still took five days. At the end of that process you would have an index with no deleted documents, yet it would be nearly a week out of date. This was clearly not acceptable for a fast moving multinational music market.

Now I know why search was so slow. Why were the search results often wrong?

The data returned in a search result was retrieved entirely from a single document in the Lucene index. Almost all of the other, non-search catalogue API endpoints queried a different SQL database, which was the original source of the metadata. Given the time it took for data to be read from the original SQL database, posted into the SOLR master server and then replicated to the slave servers, there would often be a gap in time before the two databases were consistent with each other. Occasionally an update would not make it to the SOLR slaves at all, and the search API would return stale results which then had to be manually corrected.

Sounds like a nightmare. How did you solve these problems?

It was! Now that we’ve adequately described the problem, my next post will explain how we fixed the data consistency problem and improved search response times by 88%.



Using TDD to write Infrastructure Configuration Code for Legacy Servers

There’s much talk about DevOps these days, with its differing interpretations. One aspect is to automate the configuration of your server infrastructure by writing configuration code that is version controlled and tested. But how can this be applied to existing infrastructure, in particular poorly maintained legacy systems that are already in production? This was a challenge that the Content Discovery Team at 7digital (which I lead) overcame.

The Challenge

We had inherited a legacy production server with no UAT version and no tests. Every day the system generated mission critical files which were populated from a database running queries that took around eight hours to complete.

There was a requirement to change the application, but doing so immediately would have meant a perilous deploy to production without any tests, which I decided was not acceptable. We would only deploy to production once we had built a UAT environment and tested the code there, but the first step was to get the code running on a developer machine.

Our first challenge was to understand what exactly the program was doing. This was made trickier by the fact that it ran out-of-date versions of ruby on old versions of Linux.

After a frustrating period attempting to run the current code with its ancient ruby gems, we decided that what the software did trumped how it did it. A new production server with the latest versions of ruby and gems would ultimately be created, with a matching UAT environment.

Replacing manual configuration with automated configuration

7digital have been moving to a single configuration management system and CFEngine was the tool of choice. Rather than manually installing and configuring the new server, we wrote a CFEngine promise file with the bare minimum requirements for the new application, starting with the requirement that the correct version of ruby was installed.

As a team (myself, Dan Kalotay, Dan Watts & Matt Bailey) we were familiar with CFEngine, but not with testing configuration code. We were helped by fellow 7digital developers Sam Crang and James Lewis, who got us up and running with Test Kitchen, and Anna Kennedy whose CFEngine knowledge helped us greatly.

To test the configuration we used Test Kitchen, which uses Vagrant to spin up a new Debian virtual machine on the development machine. The Debian image was one of our production server images, which had the CFEngine agent pre-installed. Using the Vagrantfile we slaved it to use the locally edited CFEngine file as its source of configuration, instead of a remote CFEngine hub. Once the virtual machine had been created we would run the Test Kitchen kitchen converge command, which applied the CFEngine promises defined in the local promise file.
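
A minimal .kitchen.yml for this kind of setup might look something like the sketch below. The box name, wrapper script and platform are assumptions, and we drive the CFEngine agent from a shell script via Test Kitchen’s shell provisioner here; the team’s actual wiring may have differed:

```yaml
driver:
  name: vagrant
  box: debian-7-cfengine          # hypothetical internal base box with cf-agent pre-installed

provisioner:
  name: shell
  script: run_cfengine.sh         # hypothetical wrapper that runs: cf-agent -KIf /vagrant/promises.cf

platforms:
  - name: debian-7

suites:
  - name: default                 # Serverspec tests live under test/integration/default/serverspec/
```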

So far, so DevOps. But how does one automatically assert if the promises set the desired state?

Writing Infrastructure Code in a TDD Manner

Using Test Kitchen it is possible to run a suite of tests against the newly created virtual machine. We used Serverspec, as it allows us to write RSpec-style tests. Serverspec uses SSH to connect to the virtual machine and can then run any command needed to assert that it has been configured as desired.
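
A minimal Serverspec spec for this kind of server might look like the following; the package name, cron file path and schedule are illustrative rather than the application’s real ones:

```ruby
require 'spec_helper'   # sets up the connection to the virtual machine as described above

# The version of ruby the application needs should be installed
describe package('ruby1.9.3') do
  it { should be_installed }
end

# The nightly file-generation job should be scheduled (path and time are hypothetical)
describe file('/etc/cron.d/generate_files') do
  it { should be_file }
  its(:content) { should match(/^0 2 \* \* \* appuser \/opt\/app\/generate_files\.sh$/) }
end
```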

We then started to write tests for the state of the server in a TDD fashion. We would for example:

  1. Write a test to assert that a cron job will run at a certain time,
  2. Run the test and see it fail, since the cron job has not been created yet,
  3. Create a CFEngine promise to create the cron job (a sketch of such a promise follows this list),
  4. Run kitchen converge && kitchen verify to apply the configuration and run the test again,
  5. See the test pass, or if it fails, go back to step 3.
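
Step 3 might look something like the following CFEngine 3 promise; the bundle name, file path and schedule are hypothetical, and it assumes the CFEngine standard library for the mog, empty and insert_lines bodies:

```cf3
bundle agent generate_files_cron
{
  files:

    # Manage a cron.d entry so the file-generation job runs at 02:00 every day
    "/etc/cron.d/generate_files"
      create        => "true",
      perms         => mog("0644", "root", "root"),
      edit_defaults => empty,
      edit_line     => insert_lines("0 2 * * * appuser /opt/app/generate_files.sh");
}
```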

In this way we added more configuration by repeating this Red-Green-Refactor process, which is familiar to most modern programmers. Running the Serverspec tests allowed us to drive out the configuration, accreting functionality on the virtual machine and building up the promise file that defined the server state.

Deploying to UAT

Once we were happy with the configuration, we committed and pushed the CFEngine promise file up to the CFEngine policy hub for our UAT environment. It was then straightforward to ask our infrastructure team to create the new UAT server slaved to that same policy hub. Once in UAT we could run more detailed tests overnight, since the SQL queries we were running took around eight hours to complete. Our QA team worked with us to assert that the product worked in accordance with our acceptance criteria.

Deploying to production, now a low-ceremony, low risk event

Once every party was happy with the end result, it was time for our infrastructure team to spin up a new production virtual machine slaved to our CFEngine hub. Within minutes our replacement server was in production, along with the changes our clients required. Within a few days the old server had been permanently retired.

This was consistent with the idea that organisations should treat their servers as cattle, not pets: in the event of a server failure, a new virtual machine can be spun up with little human intervention, which works well in cases where the server does not store mission critical data. We had replaced a delicate and temperamental “pet-like” server with a more disposable “cattle-like” one. In case of server problems, prod.server0001 could be replaced within minutes with an identically configured prod.server0002.

What we learnt along the way

An early test failure we experienced was around installing ruby gems. We attempted to use CFEngine to run the shell command gem install <gemname>, but this always failed. This was due to the way CFEngine executes shell commands: the permissions were not appropriate for the application user. It turned out to be simpler to package each gem as a deb package and install that, rather than using CFEngine to execute shell commands to install gems.
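
With the gems repackaged as debs and published to an internal apt repository, installation becomes an ordinary packages promise; the package name below is hypothetical and the bodies come from the CFEngine standard library:

```cf3
bundle agent app_gems
{
  packages:

    # The ruby gem, repackaged as a .deb in our internal apt repository
    "rubygem-report-generator"
      package_policy => "add",
      package_method => apt;
}
```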

Another problem we overcame was that the cp command in CFEngine is not synonymous with the Linux cp command.

Conclusion

Whatever your particular interpretation of what DevOps means, what I learnt in this project was that close collaboration between developers and infrastructure engineers is what allowed us to succeed. The developers learnt much about the infrastructure, and the infrastructure team members became proficient in understanding how the application worked and comfortable with using version control for infrastructure code. Clear communication was key, and together we learnt how to get the job done.

We also learnt that sometimes it’s worth starting from scratch rather than attempting to retrofit CFEngine promises to an existing server. The retrofit idea was abandoned due to it being almost as risky as editing configuration files manually on the production server.

In future I’ll always try to get all existing production infrastructure I’m responsible for configured this way, as well as using it for new infrastructure.


Using Real Time Application Metrics To Optimise Solr Indexing Throughput

In my previous blog post I wrote about 7digital’s use of real time application monitoring. In this post I will expand upon how the team used application metrics to greatly improve upon a product that powered a critical part of 7digital’s infrastructure.

I’m a member of the Content Discovery Team, who are responsible for making 7digital’s catalogue of 25 million tracks available via the 7digital API.

One of the challenges of making the catalogue accessible is that the music industry lacks a universal unique identifier for artists, tracks and album releases, which means that full text search is important for discovering content. At 7digital, to cope with the increase in both traffic and catalogue size, full text searches were moved from a relational database to a Lucene-powered Solr database back in 2011.

Currently we extract the metadata for the music catalogue from a relational database, transform it into json DTOs and then post it to our Solr servers, which perform full text searches for music releases, tracks and artists. Placing the catalogue into Solr makes it searchable, and also makes it possible to retrieve release and artist metadata when supplying a unique 7digital ID.

The datasets are relatively large; creating an index of the entire dataset of 25 million tracks with the schema used would take many days, so it was necessary to  append the hourly changes of the catalogue to an existing index.

There was a relational database containing many millions of updates to the catalogue in time order; this was the typical amount of metadata generated per day. Each update was represented as a single row in a “changes” table in the database.

For example, a row storing metadata for a track would contain the track title, track artist and track price as some of its columns. Each row also had a column denoting whether that metadata was being inserted, updated or deleted from the catalogue.

Solr acts as an HTTP wrapper around a Lucene index, so creating an index of the data was done via json posts to the Solr server. One piece of metadata was read from the relational database, mapped into a json document and then posted via HTTP to the Solr server. Solr would then add this document to its Lucene index, where it could be searched. This entire process is what we refer to as indexing.
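
As a sketch, posting a single document looks something like the following; the host, core and field names are assumptions rather than our actual schema:

```sh
# Post one track document to the Solr JSON update handler
curl 'http://solr-master:8983/solr/tracks/update/json' \
  -H 'Content-Type: application/json' \
  --data-binary '{"add": {"doc": {"trackId": 12345678, "title": "Happy", "artistName": "Pharrell Williams"}}}'

# The document only becomes searchable once a commit is issued
curl 'http://solr-master:8983/solr/tracks/update?commit=true'
```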

The Problem

With the old code base, indexing this data often took hours. This meant that end-to-end testing in the UAT environment involved lengthy, multi-hour “code, deploy, run tests” cycles. Typically a change would be made early in the morning so the results could be inspected by lunchtime. One could look at the log files to check on progress, but it was awkward.

The Goal

In early summer 2013 we rewrote the legacy code which performed this Extract, Transform and Load process from the relational database to the Solr server. It was no longer fit for purpose and was causing data inconsistencies which had come about in part from the long feedback cycle between making a code change and seeing the test result.

The goal was to improve the metadata indexing throughput to Solr whilst ensuring data correctness. After writing extensive unit and integration tests to verify data consistency, we turned to performance tuning in our acceptance tests. We wanted to index as many documents into Solr as quickly as possible. Log files were available, but it wasn’t straightforward to parse meaning from them when looking for patterns across a time span.

This arrangement was clearly suboptimal so we decided to incorporate real-time monitoring. We already were using StatsD to log and Graphite to visualise the web responses on our internal API endpoints and realised we could put it to use to help us here.

After improving the SQL queries, we discovered that the largest factor affecting indexing throughput was how many documents we posted for each commit.

Each document contains a change, which could be a delete, an insert or an update of an existing piece of metadata (for instance, a price change of a track). In our scenario, Lucene treats an update and an insert as the same thing; any document indexed with the same ID as an existing one will overwrite the original. Changes could therefore be boiled down to either a delete or an “upsert”.

After posting a certain number of documents, a commit command is sent, whereby the index commits the recent changes to disk. Once committed, the new documents are in the index and are searchable. While the commit phase is happening, no new documents can be posted to the Solr server. Depending on the size of the index and the number of documents, the commit phase can take many seconds, or even minutes.

Lucene works most effectively when only deletes are committed in one batch and only upserts are committed in another batch; this is much faster than interleaving deletes and upserts within the same period between commits. As noted above, each commit writes the new documents to the index on disk and comes with a minimum time to complete, during which no new documents can be posted.

The number of rows read mapped directly onto the number of documents indexed.

Using the StatsD C# client we timed how long it took to read from the relational database, how long it took to post the documents to Solr, and then how long it took to commit them, and we sent these timings to a StatsD server. We then used Graphite to visualise this data.
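
The pattern is simply to wrap each phase in a timer and report the duration to StatsD. A minimal C# sketch of that idea, using a hypothetical IStatsd wrapper rather than the exact client API we used, looks like this:

```csharp
using System;
using System.Diagnostics;

// Hypothetical wrapper around the StatsD client; the real client exposes similar timing calls
public interface IStatsd
{
    void Timing(string metricName, long milliseconds);
}

public class IndexerMetrics
{
    private readonly IStatsd statsd;

    public IndexerMetrics(IStatsd statsd)
    {
        this.statsd = statsd;
    }

    // Times one indexing phase (e.g. "indexer.deletes.read", "indexer.deletes.post",
    // "indexer.deletes.commit") and reports the duration so it can be graphed in Graphite.
    public void Time(string metricName, Action phase)
    {
        var stopwatch = Stopwatch.StartNew();
        phase();
        stopwatch.Stop();
        statsd.Timing(metricName, stopwatch.ElapsedMilliseconds);
    }
}

// Usage (the method names on the reader and Solr client are illustrative):
//   metrics.Time("indexer.deletes.read",   () => reader.ReadDeletes(range));
//   metrics.Time("indexer.deletes.post",   () => solr.PostDeletes(deleteDocs));
//   metrics.Time("indexer.deletes.commit", () => solr.Commit());
```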

The graph below shows:

  • the time taken to read the metadata changes denoting deleted tracks/releases from the relational database, within the given range of rows;
  • the time taken to post the delete documents to Solr;
  • the time taken for Solr to enact the commit (during which period no new documents may be posted);
  • the time taken to read the metadata changes denoting upserted tracks/releases from the relational database, within the given range of rows;
  • the time taken to post the new documents to Solr;
  • the time taken for Solr to enact the commit (which again prevents any new documents from being posted).

Chart showing indexing timings

As we can see, committing updates takes up to 72 seconds and is by far the greatest proportion of the time.

Large ranges would make the SQL query component take a long time, occasionally timing out and needing to be repeated. Additionally, commits would take longer.

Using smaller range sizes resulted in more frequent posts and commits to Solr, but commits seemed to have a minimum duration, so there came a point where very small ranges would mean the commit time was the largest component of the indexing process, and document throughput dropped.

How to measure indexing throughput?

We added some more statsD logging which shows the indexing throughput over the same time period as the chart above.

The chart below shows the row ID on the y axis; each row represents a change of metadata. The blue line shows the newest change in the relational database. The green line shows the row ID of the latest change that has been posted to Solr. They follow each other with a small delay; a measure of indexing throughput is how closely the green line follows the blue.

Indexing throughput

The right-hand side of this graph showed us that there were periods of time when no updates were written into the relational database, but that when there were writes, the indexer could index them into Solr in less than two minutes.

In the C# code that did the Extract, Transform and Load, we added additional StatsD metrics, which reported the maximum rowID of the changes table in the relational database and the latest rowID that had been posted to Solr. This graph is still used in production to monitor the application in real time. It lets us observe when many metadata changes are made (for instance, when new content is added), diagnose any issues with the database we read from, and see how long it may take for the content to become available in the Solr servers.
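
Those two metrics are simple gauges; continuing the hypothetical wrapper from the earlier sketch, reporting them once per indexing cycle might look like this (interface, class and metric names are illustrative):

```csharp
// Assumes the StatsD wrapper also exposes a gauge method
public interface IStatsdGauges
{
    void Gauge(string metricName, long value);
}

public class IndexerProgressReporter
{
    private readonly IStatsdGauges statsd;

    public IndexerProgressReporter(IStatsdGauges statsd)
    {
        this.statsd = statsd;
    }

    // Called once per indexing cycle: the gap between the two gauges is the indexing lag.
    public void Report(long newestRowIdInDatabase, long latestRowIdPostedToSolr)
    {
        statsd.Gauge("indexer.changes.newest_row_id",  newestRowIdInDatabase);
        statsd.Gauge("indexer.changes.indexed_row_id", latestRowIdPostedToSolr);
    }
}
```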

The Result

By monitoring how many new changes were being written to the database, and how quickly the indexer could commit them to Solr, we discovered that above a certain rate, increasing throughput would have no noticeable effect, since there was no benefit in attempting to read more rows per hour than were being created. This level of monitoring allowed us to decide effectively on the direction to take when developing the product. It also gave us the unexpected benefit of visibility into a business critical database process which was previously unmonitored: the rate of changes being recorded was now known.

Ultimately, in order to maximise end-to-end indexing throughput, we found a sweet spot of 100k documents, and by using the metrics we only optimised where we needed to, thus saving valuable development time.
