Rap Genius is Back on Google
It will take a few days for things to return to normal, but we’re officially back!
First of all, we owe a big thanks to Google for being fair and transparent and allowing us back onto their results pages. We overstepped, and we deserved to get smacked.
This post has 2 parts:

1. The history of our link-building strategy: how it started, evolved, and eventually spun out of control.
2. The story of how we approached the problem of identifying and removing problematic links from the Internet and got back on Google.

The Story of the Rap Genius Blog Family

In the beginning (back in 2009, when it was just a few friends), we’d email sites like Nah Right, HipHopDX, Rap Radar, and NYMag and beg them to feature stuff like well-annotated songs, the Rap Map, etc. In this, we had minimal success.
Over the next couple years, the site and the community of annotators grew. Many of our contributors had music blogs, and some would link to Rap Genius pages when mentioning tracks in their posts. Our social media presence grew, and we made friends on Twitter with a handful of music bloggers.
So at this point, our blog family was just a series of blogs, many of which belonged to site contributors, whose content and tone aligned well with Rap Genius. They linked our song pages in relevant posts, and we helped promote their blogs. The spirit of this was broader than a quid-pro-quo link exchange – the bloggers linked to us because their readers were interested in our lyrics and annotations, not just because they wanted to help us. We plugged them because they had good, tweet-worthy blog posts, not just because we appreciated their help with promotion.
As we grew, our blog family began to get fancy. We began collaborations with major publications – including TheGrio, Esquire, Huffington Post, and The Atlantic – where they would ask us to write reviews and op-ed pieces on their sites. This was fun because it was a chance to promote Rap Genius to bigger audiences, and, since the publications liked it when we linked to interesting Rap Genius content (top lines, top songs, etc), it also helped our search rankings.
Most of the time the links from our blog family and these guest posts were organically woven into the text. For example (with Rap Genius links highlighted):

Other times we would link an album’s entire tracklist at the bottom of a post, and we encouraged other bloggers to do the same. Eventually we made it very easy for bloggers to grab the links for a whole album by adding an “embed” button to our album pages. This produced posts that look like this:

This is a blog post about Reasonable Doubt, with the full tracklist linked below. This definitely looks less natural, but at the time we didn’t think we were breaking Google’s rules because:

- The links were to relevant pages with good content.
- The bloggers were into the links and their readers liked Rap Genius.
- We thought Google penalized you a bit for backlinks that are clumped like this. I.e., we thought posts like this were worse for SEO than those with links woven into the text, but still not against the rules.
- These links’ anchor text (“Artist name – Song name lyrics”) was optimized to be sure, but they were also accurate descriptions of the linked content.

This takes us up to the past couple months, when we did two things that were more or less totally debauched:

1. On guest posts, we appended lists of song links (often tracklists of popular new albums) that were sometimes completely unrelated to the music that was the subject of the post.
2. We offered to promote any blog whose owner linked to an album on Rap Genius in any post regardless of its content. This practice led to posts like this:
This last one triggered the controversy that caused Google to blacklist us. It started when John Marbach wrote to Mahbod to ask him about the details of the “Rap Genius blog affiliate program” (a recent Mahbod coinage):

Mahbod wrote back, and without asking what kind of blog John had or anything about the content of the post he intended to write, gave him the HTML of the tracklist of Bieber’s new album and asked him to link it. In return, he offered to tweet exactly what John wanted and promised “MASSIVE traffic” to his site.
The dubious-sounding “Rap Genius blog affiliate program”, the self-parodic used car salesman tone of the email to John, the lack of any discretion in the targeting of a partner – this all looked really bad. And it was really bad: a lazy and likely ineffective “strategy”, so over-the-top in its obviousness that it was practically begging for a response from Google.
When Matt Cutts chimed in on the Hacker News thread, action from Google seemed inevitable. Sure enough, we woke up on Christmas morning to a “manual action” from Google, which bumped Rap Genius to the sixth page of search results for even queries like [rap genius].

How did we get back on Google?

Google’s manual action had the reason “Unnatural links to your site”, which they explain as “a pattern of unnatural artificial, deceptive, or manipulative links pointing to your site.” Google recommends a 4-step approach to fixing the problem:

1. Download a list of links to your site from Webmaster Tools.
2. Check this list for any links that violate Google’s guidelines on linking.
3. For any links that violate the guidelines, contact the webmaster of that site and ask that they either remove the links or prevent them from passing PageRank, such as by adding a rel="nofollow" attribute.
4. Use the Disavow links tool in Webmaster Tools to disavow any links you were unable to get removed.

So that morning we dug in.
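For reference, the disavow file you upload in step 4 is just a plain text file with one URL per line; a line starting with “domain:” disavows an entire domain, and lines starting with “#” are comments. Something like this (the domains below are placeholders, not sites we actually disavowed):

# spammy aggregator we could not reach
domain:lyrics-scraper.example.com

# a single page whose webmaster never responded
http://musicblog.example.com/2013/12/bieber-tracklist/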
First we assembled the biggest list of inbound Rap Genius links we could get our hands on. We did this by combining the list of “links to your site” from Webmaster Tools that Google recommends with the list of inbound links you can download from Moz’s link search tool, Open Site Explorer. After some cleaning and de-duping, we ended up with a master list of 177,781 URLs that contained inbound links to Rap Genius.
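The cleaning and de-duping was nothing fancy – a minimal sketch, assuming one URL per line in each export (the file names are placeholders, and the real exports needed a bit more massaging than this):

require 'set'

# Merge the Webmaster Tools and Open Site Explorer exports into one de-duped list
def build_master_list(*files)
  urls = Set.new
  files.each do |file|
    File.foreach(file) do |line|
      url = line.strip.chomp('/') # treat trailing-slash variants as the same URL
      urls << url unless url.empty?
    end
  end
  urls.to_a
end

master_list = build_master_list('webmaster_tools_links.txt', 'open_site_explorer_links.txt')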
Now we had to find out which of these URLs contained Rap Genius links that Google considered unnatural. The obvious place to start was the URLs associated with publications that we had promoted via Twitter or otherwise had relationships with. So we compiled a list of 100 “potentially problematic domains” and filtered the 178k-URL master list down to a new list of 3,333 URLs that contained inbound Rap Genius links and were hosted on one of those domains.
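The filtering itself is straightforward – a rough sketch, assuming the master list from above and a text file of problematic domains (the helper and file names are ours, for illustration):

require 'uri'

PROBLEMATIC_DOMAINS = File.readlines('potentially_problematic_domains.txt').map(&:strip)

# True if the URL is hosted on (or on a subdomain of) a problematic domain
def potentially_problematic?(url)
  host = URI.parse(url).host.to_s.sub(/\Awww\./, '')
  PROBLEMATIC_DOMAINS.any? { |domain| host == domain || host.end_with?(".#{domain}") }
rescue URI::InvalidURIError
  false
end

urls_to_review = master_list.select { |url| potentially_problematic?(url) }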
Next we manually examined each of these 3,333 URLs and categorized each one based on whether it contained unnatural links.
Here are the groupings and counts we came up with:

- Group 1: Contains no links or already counted in another group. These we discarded. (1,791 pages)
- Group 2: Contains links organically woven into the text (1,294 pages)
- Group 3: Contains relevant structured link lists (169 pages)
- Group 4: Contains irrelevant structured link lists (129 pages)

The URLs in Group 4 obviously had to go, but we decided to remove the links on the Group 3 URLs as well just to be safe. So we started emailing bloggers and asking them to take down links.
This was a good start, but we wanted to catch everything. There had to be a better way! Enter the scraper.

The Scraper

To be completely thorough, we needed a way to examine every single URL in the master list of 178k linking URLs for evidence of unnatural links so we could get them removed.
So we wrote a scraper to download each of the 178k URLs, parse the HTML and rank them by “suspiciousness”, which was calculated from:

- The total number of Rap Genius song page links the URL contained (more links is more suspicious)
- How many of those links contained “ Lyrics” (standardized anchor text is more suspicious)
- Whether the links were in a “clump” (i.e., 2 or more links separated by either nothing or whitespace)

Then we visited and categorized the most suspicious URLs by hand.
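To give a flavor of the scoring, here is a minimal sketch of a suspiciousness score built with Nokogiri – the selectors and weights are illustrative, not our exact heuristics:

require 'nokogiri'

# Rough "suspiciousness" score for one page's HTML (weights are illustrative)
def suspiciousness(html)
  doc   = Nokogiri::HTML(html)
  links = doc.css('a[href*="rapgenius.com"]')

  lyrics_anchors = links.count { |a| a.text.include?(' Lyrics') }

  # Count links whose next non-whitespace sibling is another link (a "clump")
  clumped = links.count do |a|
    nxt = a.next
    nxt = nxt.next while nxt && nxt.text? && nxt.text.strip.empty?
    nxt && nxt.element? && nxt.name == 'a'
  end

  links.size + (2 * lyrics_anchors) + (3 * clumped)
end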
Calculating suspiciousness for an individual URL is relatively straightforward with Nokogiri, but downloading all the URLs is more challenging. How did we do it?

Technical Digression: How to scrape 178k URLs in Ruby in less than 15 minutes

Ok, so you have 178k URLs in a Postgres database table. You need to scrape and analyze all of them and write the analysis back to the database. Then, once everything’s done, generate a CSV from the scraped data.

The naive approach

Open-uri is probably the easiest way to download a URL in Ruby:

require 'open-uri'
urls.each do |url|
analyze_response(open(url).read)
end
But downloading and analyzing 178k URLs one at a time would take days – how do we make it faster?

Concurrency

To make this faster we need a way to download multiple URLs simultaneously: Ruby threads. So let’s create a bunch of threads to simultaneously grab URLs from a queue and download them. Something like this:

require 'open-uri'
require 'thread'
queue, processed = Queue.new, Queue.new
urls.each { |u| queue << u }
concurrency = 200
concurrency.times do
Thread.new { processed << open(queue.pop).read until queue.empty? }
end
urls.length.times { analyze_response(processed.pop) }

But this is verbose, and writing concurrent code can be quite confusing. A better idea is to use Typhoeus, which abstracts away the thread-handling logic behind a simple, callback-based API. Here’s the functionality of the code above implemented using Typhoeus:

hydra = Typhoeus::Hydra.new(max_concurrency: 200)
urls.each do |url|
  request = Typhoeus::Request.new(url)
  request.on_complete do |response|
    analyze_response(response.body)
  end
  hydra.queue(request)
end
hydra.run
Now we can download 200 pages at once – nice! But even at this rate it still takes over 3 hours to scrape all 178k URLs. Can we make it faster?

Even More Concurrency

The naive way to make this faster is to turn up Typhoeus’s concurrency level past 200. But as Typhoeus’s documentation warns, doing this causes things to “get flakey” – your program can only do so many things at once before it runs out of memory.
Also, though turning up Typhoeus’s concurrency would increase parallelization in downloading the URLs (since this is IO-bound), the processing of each response is CPU-bound and therefore cannot be effectively parallelized within a single MRI Ruby process.
So to achieve more parallelism we need more memory and CPU power – what if we could get 100 machines each downloading 200 URLs at a time by running their own version of the program?
Sounds like the perfect job for Amazon EC2. But configuring, spinning up, and coordinating a bunch of EC2 nodes is annoying and you have to write a bunch of boilerplate code to get it going. If only there were a way to abstract away the underlying virtual machines and instead only think about executing 100 simultaneous processes. Fortunately Heroku does exactly this!
People think of Heroku as a platform for web applications, but it’s pretty easy to hack it to run a work queue application like ours. All you have to do is put your app into maintenance mode, scale your web dynos to 0, edit your Procfile, and spin up some worker processes. So we added the smallest database that supports 100 simultaneous connections, spun up 100 workers, and started scraping.
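For the curious, the setup looks roughly like this – a hedged sketch, not our exact configuration (the worker script name is a placeholder). The Procfile declares a single worker process:

worker: bundle exec ruby scrape_urls.rb

and then it’s a matter of a couple of Heroku commands:

heroku maintenance:on
heroku ps:scale web=0 worker=100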
Now we had 100 instances of the program running simultaneously, each of which was scraping 200 URLs at a time – a 4-order-of-magnitude improvement over our naive solution. But…

How do we prevent workers from tripping over one another?

This latest solution is highly performant, but that performance doesn’t come for free – we have to add additional logic to make the workers work together. Specifically, we must ensure that every worker grabs its own URLs to scrape and that no 2 workers ever end up scraping the same URLs at the same time.
There are a number of approaches to keeping the workers out of each other’s way, but for simplicity we decided to do it all in the database. The idea is simple: when a worker wants to grab a new set of 200 URLs to scrape, it performs an update on the URLs table to lock each row it selects. Since each worker only tries to grab unlocked URL rows, no 2 workers will ever grab the same URL.
We decided to implement this by borrowing some of the locking ideas from Delayed Job ActiveRecord:

scope :not_locked, -> { where(locked_at: nil) }
scope :unscraped, -> { where(fetched: false).not_locked }
def Url.reserve_batch_for_scraping(limit)
  urls_subquery = unscraped.limit(limit).order(:id).select(:id).lock(true).to_sql
  db_time_now = connection.quote(Time.now.utc.to_s(:db))

  find_by_sql <<-SQL
    UPDATE urls SET locked_at = #{db_time_now}, updated_at = #{db_time_now}
    WHERE id IN (#{urls_subquery})
    RETURNING *
  SQL
end
And that’s it.
Ok fine, that’s not it – you still have to write logic to unlock URL records when a worker locks them but somehow dies before finishing scraping them. But after that you’re done!

The Results

With our final approach – using 100 workers each of which scraped 200 URLs at a time – it took less than 15 minutes to scrape all 178k URLs. Not bad.
Some numbers from the final run of scraping/parsing:

- Total pages fetched: 177,755
- Total pages scraped successfully: 124,286
- Total scrape failures: 53,469
- Timeouts (20s): 15,703
- Pages that no longer exist (404): 13,884
- Other notable error codes (code: count):
  - 403: 12,305
  - 522: 2,951
  - 503: 2,759
  - 520: 2,159
  - 500: 1,896

If you’re curious and want to learn more, check out the code on GitHub.

End of technical digression

The scraper produced great results – we discovered 590 more pages with structured link lists and asked the relevant webmasters to remove the links.
However, the vast majority of the URLs the scraper uncovered were fundamentally different from the pages we had seen before. They contained structured lists of Rap Genius links, but were part of spammy aggregator/scraper sites that had scraped (sometimes en masse) the posts of the sites with which we had relationships (and posts from Rap Genius itself).
Generally Google doesn’t hold you responsible for unnatural inbound links outside of your control, so under normal circumstances we’d expect Google to simply ignore these spammy pages.
But in this case, we had screwed up and wanted to be as proactive as possible. So we wrote another script to scrape WHOIS listings and pull down as many contact email addresses for these aggregation/scraping operations as we could find. We asked all the webmasters whose contact information we could find to remove the links.

Disavowing links we couldn’t get removed

We were very successful in getting webmasters we knew to remove unnatural links. Most of these folks were friends of the site and were sad to see us disappear from Google.
All in all, of the 286 potentially problematic URLs that we manually identified, 217 (more than 75 percent!) have already had all unnatural links purged.
Unsurprisingly we did not have as much success in getting the unnatural links flagged by our scraper removed. Most of these sites are super-spammy aggregators/scrapers that we can only assume have Dr. Robotnik-esque webmasters without access to email or human emotion. Since we couldn’t reach them in their mountain lairs, we had no choice but to disavow these URLs (and all remaining URLs containing unnatural links from our first batch) using Google’s disavowal tool.

Conclusion

We hope this gives some insight into how Rap Genius did SEO and what went on behind the scenes while we were exiled from Google. Also, if you find your website removed from Google, we hope you find our process and tools helpful for getting back.
To Google and our fans: we’re sorry for being such morons. We regret our foray into irrelevant unnatural linking. We’re focused on building the best site in the world for understanding lyrics, poetry, and prose and watching it naturally rise to the top of the search results.
Though Google is an extremely important part of helping people discover and navigate Rap Genius, we hope that this ordeal will make fans see that Rap Genius is more than a Google-access-only website. The only way to fully appreciate and benefit from Rap Genius is to sign up for an account and use Rap Genius – not as a substitute for Wikipedia or lyrics sites, but as a social network where curious and intelligent people gather to socialize and engage in close reading of text.
Much love. iOS app dropping next week!
— Tom, Ilan, and Mahbod