Ultimate Scrapebox Advantage
Proxies
Dofollow blogs lists/resources
Dofollow directory/search engine
RSS
Pinging
Forums
Web 2.0 Sites
Footprints Continued
Blogs
Forums
Directories
Ping Mode
.edu/.gov Blogs
.edu/.gov Forums
Email Harvesting
Comment Footprints
General edu (try these with .gov tld as well)
Legal Stuff/Disclaimer
I suppose I have to get this out of the way.
This publication and all its contents are protected by the US Copyright Act of 1976 and all other applicable international, federal, state and local laws, and all rights are reserved, including resale rights. You are not allowed to give this product away or to sell this guide to anyone else. If you bought or downloaded this publication from anyone other than Josh M (drummer05) on the backlink forum, warrior forum, blueprint forum, or www.thescrapeboxmasteradvantage.com (or its partners), then you have received a pirated copy. Please contact us via email at [email protected] and notify us of the situation.
Also note that most of this publication is based on personal experience and reliable evidence. Although I have made every reasonable attempt to achieve complete accuracy of the content in this guide, I assume no responsibility for errors or omissions. You should use this information as you see fit, and at your own risk. Your particular situation may not be exactly suited to the examples illustrated in this guide; in fact, it's likely that they will not be the same, and you should adjust your use of the information and recommendations accordingly.
Any trademarks, service marks, product names, or named features are assumed to be the property of
their respective owners and are only used as a reference. There is no implied endorsement if we use
one of these terms.
Finally, think! Use your common sense; nothing in this guide is meant to replace your common sense or natural train of thought, or medical or other professional advice. It is meant to inform as well as entertain the reader.
Introduction
Dear Reader,
Firstly, thank you so much for purchasing the Ultimate Scrapebox Advantage. You have decided to
put your trust and faith in me and the methods that are in this guide, and you have made the right
decision. I know that the techniques, methods and ideas that are discussed here will enlighten you
and enrich your use of the Scrapebox tool in every aspect imaginable.
In this guide I will give a short introduction to the basics, what I think you should know, most
common add-ons and their uses, and other resources to help you as well as the merge feature and
proxies.
Then I get into the meat of the course and I start to discuss auto approved blogs. I talk about stealing your competitors' links in various different ways, and give you other ways of building auto approved blog lists too. Then I talk about high pr moderated blogs, and go into some detail with methods there as well. Next is do-follow blogs; I talk about how to find them and all the information that should come with finding them. I give some great resources and techniques here which you are going to love.
I talk about getting your comments approved, finding high pr forums, using the scrapebox learn feature, and other techniques like scraping images, plr articles, rss, indexing, and more. In the appendix section I give all the resources I use, tools, websites, and most importantly, a list of custom footprints for you to use.
It's probably best for you to read this whole guide before putting anything into practice. However, if you would like to just skim through the guide to see which techniques you like, or if you find using the material in the appendix section more beneficial, then do what suits you best.
Anyway, it's time to get the introduction out of the way and start on the good stuff..... So let's get to it!
Cheers,
Josh M
Basics
Stuff you should already know
I don't want to waste too much of your time going over the obvious stuff. There is literally a ton of free information on scrapebox to get you started, and it would be a waste of both our time to include it in this e-book. This guide is meant to give you advanced backlinking methods and advanced scrapebox uses, putting the two together to help your sites rank faster and smoother.
If you don't already know how to use scrapebox, then I urge you to follow these links, and watch and learn. The tool can be learned in a few days and mastered in a few weeks, and with this guide you will have the ultimate advantage over anyone who only has these free resources.
Resources you need before Reading this guide
Official Forum
http://www.scrapeboxforum.com/index.php
Official User Guide
http://www.scrapebox.com/usage-guide/
This is the Official Youtube Channel
http://www.youtube.com/user/scrapebox/
Other Youtube resources that can help you
rintintindy
http://www.youtube.com/user/RinTinTindy
scrapeboxblueprint
http://www.youtube.com/user/ScrapeBoxBlueprint
I'm sure there are more out there; in fact I know there are. If you have any problems or issues with scrapebox, a Google search will tell you all you need to know. For the purposes of this guide, however, let's get going already!
Just before we start, I would like to mention that in any business you will need to invest money. For example, you have just invested in a guide to teach you scrapebox methods. Within this guide I will tell you the good places to invest your money to help you along the way; however, I will always present a free option for you.
You do not need to spend any more money, but if you're serious about Internet Marketing, which I'm sure you are, then you should understand the importance of investing, so don't get upset or offended if I tell you that it's a good idea for you to go and buy some tool or service.
Don't forget, there is usually a free version of everything, so don't worry.....
...Let's begin!
Scrapebox
Here is scrapebox broken down for you. It might look complicated if you're looking at it for the first time, but if you think about it in sections then it's much easier to deal with. Scrapebox is split into 4 main areas: the harvester, the URL section, the proxy area and the comment poster.
The Tool
[Screenshot: the main scrapebox window, with the harvester, harvested URLs, proxies area, and comment poster labelled, and where to input your info for commenting.]
Here is some more basic information on each area to help you understand what this tool can do. Do not expect to be an expert after you read this; it is just meant as a reference and an introduction to the various features.
TIP: When you select check links, the names field, emails field, and comments field will be greyed out. All you need to do is put the link you are checking for in the websites area, and the list of the sites you are checking on.
Anchor Text and the Names Field
You can use the name generator to generate names for the name field, but this field is also used as
an anchor text field. This is because your name is usually used as the link to your site, so if you want
to have the right anchor text for your link you need to write them in the names field. This is usually a
problem for people, because if you are linking to several domains then you want to spin the anchor
text according to those domains. Also you dont want to have to keep changing the names field.
So here is the solution: I always just fill the names field up with generated names; these can be the same every time, it doesn't matter. Then in the websites file, where your links are going to go, you write the text file like this,
http://www.website1.com {keyword1|keyword2|keyword3}
http://www.website2.com {keyword4|keyword5}
What this will do is spin the keywords in the brackets and use them as anchor text for the preceding website. This means you can link multiple sites with specific anchor texts, and it means you don't have to input your keywords into the names field every time, or have loads of keyword text files ready for the names field.
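If you manage a lot of domains, you can also generate this websites file with a few lines of code. Here is a minimal Python sketch; the domains, keywords and file name are just placeholders, and the output is the url {kw1|kw2} format shown above.

# Build a scrapebox websites file where each URL carries its own
# spinnable anchor text, in the "url {kw1|kw2|kw3}" format.
sites = {
    "http://www.website1.com": ["keyword1", "keyword2", "keyword3"],
    "http://www.website2.com": ["keyword4", "keyword5"],
}

with open("websites.txt", "w") as f:
    for url, keywords in sites.items():
        # One line per site: the URL, a space, then the spintax group.
        f.write("%s {%s}\n" % (url, "|".join(keywords)))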
Settings Menu - here you can adjust the maximum connections.
Options Menu - here you can use custom user agents and edit custom user agents.
The Add-Ons
The add-ons are a great part of scrapebox. Before reading this guide, make sure you have downloaded and installed every add-on, and read what each one does. There is a short description in the available add-ons window, and at this site - http://www.scrapebox.com/addons - so I don't need to explain them here. Some add-ons are used in my techniques, so you will need to know what they are. There are only 23 add-ons and their names are all pretty self explanatory, but if you don't know exactly what one does, and the short description is not enough, do a search for the specific add-on.
Footprints
You have to have a basic idea of what footprints are and how to use them with scrapebox, as most of this e-book is centred on the use of footprints. A footprint is basically a marking on a website that makes it unique compared to other websites. For example, every website with .edu in the url is going to be an education/university website. If a website has the keywords "dog houses", then that is a footprint that tells us that the page is related to dog houses. In fact, any text in the entire html coding of a website can be tracked as a footprint.
Here is an example. I searched in google with this footprint:
"powered by wordpress" "leave a reply" dog training
This brings up all the websites that have those phrases in the html code of the website. Here are some cut-outs of one of the resulting websites.
The keyword "dog training" is in the title, which is perfect for our niche. The phrase "leave a reply" is visible, so we know commenting is allowed. And of course, it's a wordpress site, so we know scrapebox can post to it.
And here are portions of the source code that have the footprints highlighted.
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
<title>Chaar Dog Training goes to the Allentown Pet Expo « Chaar Dog Training</title>

<!-- Comment Form -->
<div id="respond">
<a name="commentform"></a><!-- named anchor for skip links -->
<h3 class="reply">Leave a Reply</h3>
At the end of the book I will provide a much more detailed footprint list and you can use that to
scrape anything you want.
This is just a simple example of using keywords to scrape various sites, and there are infinite possibilities here. However, if you want to become a scrapebox master, you need to be aware of some basic google operators that you can utilize to get the most out of your footprints.
Eventually, you are going to want to produce your own footprints, as these will always be the most unique, and you can only do this with a basic knowledge of operators, so here are the operators that you can use.
Search Operator - Meaning
inanchor:keyword - Only the keyword following the operator must appear in the anchor text of a link pointing towards the page.
allinanchor:keyword - All query words must appear in the anchor text of links pointing to the page.
allintext:keyword - All words must appear in the text of the page.
intext:keyword - Only the keyword following the operator must appear in the text of the page.
allintitle: - All query words must appear in the title of the page.
intitle: - The terms must appear in the title of the page.
allinurl: - All query words must appear in the URL.
inurl: - The terms must appear in the URL of the page.
inurl:.xxx - Searches sites with the domain suffix xxx.
keyword - Searches for pages that have your keyword in the page.
-keyword - Shows pages that do NOT have that keyword in the page.
"keyword phrase" - Finds pages with that exact keyword phrase.
link:www.site.com - Finds linked pages, i.e. shows pages that point to the URL.
site:www.site.com - Searches only one website or domain.
These are the standard search engine operators, and I will be using them later when we talk about more complex footprints to use with scrapebox operators.
These common operators can be combined with any other footprints I give you, footprints you find in the appendix, or footprints you just pick up. They can be combined, messed around with, and used in many varying ways. I will continue from the example I used earlier, which is already using three keyword phrases. Let's see what we can add to that.
I could make the footprint,
intitle:"dog training" "powered by wordpress" "leave a reply" inurl:.com
That will scrape all the pages indexed by google that are .com, have dog training in the title, are on the wordpress platform, and give you the option to leave a reply.
I will go through most of the footprints when I teach you the various scraping techniques and methods, so I won't go into too much detail here, but you should always come back to this section to see how you can drill down to find what you want with more targeted operators, and so you get used to using them regularly.
Also, I provide a massive list of footprints for you to use at the end of this e-book in the appendix section, so don't think you are going to have to find all the footprints yourself. Most of the work is already done for you.
The Merge Feature
This feature isn't really spoken about much in the tutorials that I have found on the net, so I will explain it here and then give you all the resources you need in order to use the merge feature.
Basically, you have your footprints in a text file, with the scrapebox operator %KW% somewhere in each one (e.g. "powered by wordpress" %KW%). Then you load up keywords in the keyword area of scrapebox, or scrape keywords, and when you have your keywords, you hit the merge button. Then load up the text file with the footprints in it, and every footprint will be merged with all of the keywords.
You can do this with several footprints in one file, and several keywords. The more combinations of keywords and footprints, the more results you will get.
Think of the possibilities here: if you have 10 footprints in the text file and you have 100 keywords, then you can merge 1000 unique footprints, which will each scrape hundreds if not thousands of results.
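To make this concrete, here is a minimal Python sketch of what the merge does, assuming footprints.txt holds one footprint per line (each containing %KW%) and keywords.txt holds one keyword per line; the file names are just examples.

# Recreate the merge feature: expand every %KW% footprint once per keyword.
with open("footprints.txt") as f:
    footprints = [line.strip() for line in f if line.strip()]
with open("keywords.txt") as f:
    keywords = [line.strip() for line in f if line.strip()]

# 10 footprints x 100 keywords = 1000 unique queries, as above.
merged = [fp.replace("%KW%", kw) for fp in footprints for kw in keywords]

with open("merged.txt", "w") as f:
    f.write("\n".join(merged))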
Also, you will be able to reuse the text files with your footprints in them for future niches/keywords that you would like to scrape for.
One more thing, and this is a special treat for you guys: I have attached with this course text files that are full of footprints, all ready for the merge feature.... awesome, I know!
So all you need to do now is input your keywords, and then merge in one or more of the text files for forums, blogs, ping sites, edu, gov etc., and the correct footprints will merge with your keywords instantly.
Before Blasting
Before you go out and blast your site with thousands of links using my methods, make sure you read through all of them and get a better understanding of the whole process. If you run through everything first and then come back and follow through with the information, you will learn much faster than by trying each method as you read it.
Also, I have included in the Final Thoughts section a portion on avoiding the sandbox, and how you can set up your linking structures so that no blast of links is too much. There I discuss backlinking methods like web2.0, link wheels, redirects, aged domains etc.
Do Follow or No Follow
This is a question that I have seen posed in a lot of places; some people say that no-follow is a waste of time and useless for backlinking, and some say that both are important. Firstly, let's get a few things straight. No-follow links will not help boost your rankings, since google does not consider them to be a contributing factor. There are a couple of exceptions that I think should be mentioned: Yahoo Answers and Wikipedia are no-follow, but links from those sites are almost certainly monitored and given value by google. However, the thousands of other no-follow sources will not help your rank directly.
Even so, no-follow backlinks are STILL important. Since we are trying to be organic and natural in our link building, we should always be looking to diversify our backlinking, as I'm sure many of you know. Getting no-follow links is just another way of diversifying your backlinks and your footprint, and in fact there isn't a better way to do it. If you think about it, a website that only has do-follow links is a lot more suspicious than one that has a mix of both. So in actual fact, even though no-follow links may not help you in the ranks directly, by increasing your footprint and diversifying your links they indirectly help your rankings by making your other links stronger.
So no-follow is still important, and I would recommend getting around 15%-25% no-follow links to mix up your footprint and your backlinking when trying to gain authority and rankings for a site.
Also, you have to consider that any auto approved blog you post to, whether it has 50 OBL or 10 OBL, can still be spammed to death in the future, meaning your comment will be lost in a sea of comments, turning your low OBL quality link into a rubbish spammed link on a high OBL page.
Don't forget, we are only talking about posting on auto approved blogs for now. Later on we will talk about getting quality blog comment links that will stick for a long time and not get spammed.
You have a few options here: you can load up your keywords and merge some footprints into them; you can load up your keywords and select a blog platform to scrape from; you can load up your keywords and then input a custom footprint into the footprint area; or you can hand write all the footprints with the keywords in the keyword area (which is what merging will do anyway).
Either way, you should end up with a few footprints to scrape for. Let's say your niche is dog training; here are a few possible footprints,
"powered by wordpress" "leave a reply" dog training
"powered by blogengine" dog training
"powered by typepad" dog training tips
After the harvester has finished, delete duplicate URLs. You can keep harvesting with new keywords, or keywords that weren't completed, if you would like to keep expanding your list.
After you are happy with a nice long list of domains with a mix of keywords and platforms, you need to do a test blast to the domains with a fake website, for example www.ls089jsdn-90nsdf.com, or you can use a sandboxed domain and hopefully help it out of the sandbox. Fill up the names field with generated names, and the emails with generated emails. Make some random comments, they don't need to be anything special, and then post using the fast poster.
[Flowchart: remove duplicate domains and URLs > run a blast with a test domain, then export posted and failed > submit the failed ones again, and repeat > check links; all found links are auto approved!]
After the post is complete, export all the posted ones to a text file, posted.txt, and all the failed ones to failed.txt.
Next, load up the failed ones in the blogs to comment on, and do the blast again.
Keep repeating the last few steps until you are exporting only failed ones, or very few posted ones. Make sure you save the posted ones as posted1.txt, posted2.txt, posted3.txt etc. However, with the failed ones you can keep saving over failed.txt, because you won't need them later.
Next, import and replace the list with all the posted links gathered into one .txt file. Transfer to the blogs list and tick the check links checkbox.
All the links that are found are on auto approved blogs; save this list and you now have an auto approved blogs list.
Copy this list into notepad++ (you can download it free, check the appendix), select all the text, then hit Menu > TextFX Tools > Delete Line Numbers or First Word, and repeat to remove the colon.
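If you prefer to script that cleanup instead of using TextFX, here is a rough Python equivalent. It assumes each exported line starts with a counter like "12: " before the URL, which may differ from your export.

import re

# Strip a leading "12:" style counter (number plus colon) from each line,
# leaving only the bare URLs.
with open("backlinks_raw.txt") as f:
    lines = f.readlines()

with open("backlinks_clean.txt", "w") as f:
    for line in lines:
        f.write(re.sub(r"^\s*\d+\s*:?\s*", "", line))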
Open the scrapebox backlink checker addon and load all your URLs. Hit start, and when it has found all your competitors' backlinks, hit download all to save the file containing the backlinks.
Open up the blog analyzer add-on and load your list, hit start, and when it's done filter out all the bad blogs and captcha ones (if you don't have a decaptcha service).
Save the list to the scrapebox harvester, and do a test blast with a fake domain like we did before (or you can use a sandboxed domain), and then follow the steps in the last method to check the links and find the auto approved blogs.
Export the posted and failed ones to separate txt files, then load up the failed ones and blast again. Keep doing this until you only have failed ones. Make sure all your successes are saved as separate txt files, success1.txt, success2.txt etc.
Gather all the success posts together and bring them back to scrapebox to check for links. Use the link checker and check for the fake domain you made up; the ones it finds are all auto approved.
That's the second method of building your auto approved blog list.
To take this a tiny step further, when you are looking for your competitors, try a few keywords in your niche, so you get different competitors and your auto approved list will be bigger.
Reload the failed ones; however, you can just save over failed.txt because you won't need the old failed ones.
Once you have gone through the failed ones until there are only failed ones left, or until you can't be bothered anymore, group together all the successes and load them up into the link checker so you can check your test links.
Hit start on the link checker, and all the links that were posted and come back as a success are auto approved blogs.
This method can be expanded by using many spammers' sites, on many different blogs, and then scraping all the backlinks and checking all of them. You can use the merge function and the special merge-ready files that came with this course, or create a merge file yourself.
site:http://sheknows.com/blogs/alytude/?p=3383
A list of all the pages from the site will show. This footprint will give you around 10,000 auto approved URLs from this one blog. I have included with my guide a list of 5,000 UNIQUE auto approved blogs; if you can scrape 10,000 pages from 1 blog, you could scrape 50,000,000 urls from the list I have given you.
I'm not saying that it will be that much, but it's a start for you at least.... I recommend trimming to the high pr blogs, scraping those blogs to get all the inner pages, and then trimming again based on pr. Even with all that trimming, you will still get a very nice list that you can blast at any time.
Tip: You can do this with a massive list of auto approved blogs. Just load the list into notepad++ and replace "http" with "site:http" and they will all get changed; then load that list into the keyword section and follow the same steps. Or use the merge feature with the correct file.
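If you would rather script this tip, here is a small Python sketch that does the same http to site:http conversion; the file names are just examples.

# Turn every URL in an auto approved list into a site: footprint
# ready for the scrapebox keyword section.
with open("auto_approved.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

with open("site_footprints.txt", "w") as f:
    for url in urls:
        f.write("site:%s\n" % url)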
Tip: If you only have the top level domain of a blog that you want all the postable pages from, then use the footprint site:http://www.topleveldomain.com "leave a reply".
You must make sure that at least one post page has the footprint "leave a reply" on it, so you know other pages you can comment on will also have those words. It might be "leave a comment", or "post a comment", or a few other possible variations, so this step of checking is very important.
Now you can start to harvest using whatever platform footprint you want - wordpress, movable type or blogengine; there is a huge list of custom footprints in the appendix section for you to use, so make sure you have a look.
Save all the harvested URLs into a big list, and remove duplicate urls and duplicate domains. Then check pagerank, and remove anything below PR3.
If you want, you can check for do-follow with the do-follow addon, or with a method I discuss very soon in the Finding Do-follow Blogs section. You can also check OBL at this point if you want.
Now prepare some relevant comments about barack obama to post manually on these blogs.
You are writing a comment like "I like president obama because blah blah blah, and I like the points you made here about him", and since you're on a page that has barack obama in the url, or in the title, you're guaranteed to be very targeted and your comment is much more likely to get approved.
The comments section will explain in more detail how to write comments, and how to spin comments that are relevant and will get accepted. But for now, let's have a look at another method of finding high PR blogs to comment on.
So we can use the footprint "login or register to post comments" to find other blogs that also require you to login or register.
Scrape with a footprint like this (you can use other blog platforms too):
"powered by wordpress" "login or register to post comments"
Make sure you check these, because there will be a few that won't be exactly what you're looking for. You can use the blog analyzer to check, or do it manually.
The footprint you will use should be "Submitted by Winspire"; the other possible combinations are "Winspire says", or "by Winspire". I'm sure there are other variations, but you will pick them up the more blogs you end up visiting.
Use the above footprints as keywords for harvesting, and choose custom footprint but leave the harvester field blank. So your footprints for this technique will be,
"submitted by username"
"posted by username"
"username says"
"by username"
...and other possible variations.
Try to get as many usernames as possible, so you can get more results. You can type all this in manually, or you can use the merge files that came with this e-book; just load up your usernames into the keyword area and merge the username method text file.
When you have finished harvesting, you will have a list of websites where these users have signed up to blogs with the same username and posted a comment.
Filter out the high PR ones (most of them will usually be high PR), and now you have a big list of high PR blogs that you can sign up to and manually post your relevant comment on, to get a great backlink to your site, web2.0 page, or redirect.
Since this method is for very high pr links on relevant pages, you don't have to post many; these are what I call quality links. You don't need many to produce a strong effect.
This technique is great for finding blogs that you have to login to post on, but are auto approved and allow links in the body of the comment.
rel=nofollow
Now the link checker will check for any no-follow attributes, and you want to keep all the blogs where scrapebox doesn't find anything (i.e. the failed ones), as you know there are no no-follow attributes on those pages.
Post on a list of harvested blogs with a fake domain, then go back and check links using this code in the websites file.
Again, all the failed ones are do-follow. Even though this method isn't 100%, it's better than using the scrapebox add-on, as it's more accurate and you can check sites other than wordpress. Use these methods to help find more do-follow blogs, and get better links back to your sites, web2.0 properties, or redirects.
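If you want to spot check a page yourself, here is a rough Python sketch of the same idea. It just looks for a nofollow attribute inside the anchor tags of the raw HTML, so treat the answer as a hint rather than proof; the function name and approach are my own, not a scrapebox feature.

import re
import urllib.request

def looks_dofollow(page_url, link_url):
    # Fetch the page and inspect every opening <a ...> tag.
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "ignore")
    for anchor in re.findall(r"<a\b[^>]*>", html, re.IGNORECASE):
        if link_url in anchor and "nofollow" in anchor.lower():
            return False  # our link is there, but marked no-follow
    # Count it as do-follow only if the link actually appears on the page.
    return link_url in html

# e.g. looks_dofollow("http://someblog.com/post/", "http://www.yoursite.com")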
If you find a blog you want to comment on, make sure you check that it's do-follow by using the SEO for Firefox plugin; no-follow links are shown in red. You can also get a plugin called NoDoFollow, which I use and I think is a bit better.
Download NoDoFollow here https://addons.mozilla.org/en-US/firefox/addon/nodofollow/
Ok, so you go to one of the do-follow resources above (or in the appendix) to find do-follow blogs to comment on; then you check that the blog is indeed do-follow by confirming with the NoDoFollow plugin.
Then you scrape all the pages on that domain using the custom footprint site:dofollowblog.com (without quotes), and then you run the blog analyzer tool to see if you can comment on any of the pages.
You have now built a do-follow blogs list that you can use to get targeted, moderated, do-follow backlinks to your site at any time.
To make this method even more powerful, load up a big list of sites that are confirmed do-follow into the scrapebox keyword area, and scrape all the pages from all the sites... then run the blog analyzer to build your massive list.
Do not put your keywords where it says @yourkeywords.... that is just part of the footprint. Here is an example:
"This site uses KeywordLuv. Enter YourName@YourKeywords in the name field to take advantage."
Unlike KeywordLuv sites, you will need to check whether the others are do-follow. A simple double check with the NoDoFollow plugin is enough, and it's really quick and easy to use.
Tip: The only footprint I ever use is keywordluv, because it's do-follow, you can have links in the post, and there are enough such blogs to last me a lifetime. Also, always scrape all the pages from a good blog that you find.
ifollowblue.gif
ifollowgreen.gif
ifollowpink.gif
ifollowpurple.gif
ifollowltgreen.gif
ifolloworange.gif
ifollowwhite.gif
ifollowmagenta.gif
ifollow.(gif/png/jpg, take your pick)
utrackback_ifollow.gif
ifollow-red.png
inurl:ucomment
inurl:ifollow
Whenever you find a domain that is do-follow, scrape all the pages with scrapebox and trim down with the blog analyzer.
I have also noted a couple of other examples you can use. You can create footprints based around this idea; here are a few that find wordpress blogs with almost no comments:
"Comments: 1"
"1 comment so far"
"Comments(1)"
There are probably way more footprints out there, but I don't know them all. If you ever come across blog pages with zero OBL, or one OBL, then check to see if there are any noticeable footprints, or check across a few pages to see if there are any similarities. If you find any, then scrape using that footprint and see what you come across.
Now it's going to find a bunch of sites related to the best 3 weight loss tips. If you want both terms in the title, then put intitle: before both terms. Use this footprint to scrape blogs by selecting a blog platform, or by adding a blog footprint.
Since you have scraped a targeted list of blogs with a specific subject, and you know what's in the title and what's in the blog post, you can be sure to write great comments. Here are some examples,
"Thanks for those great weight loss tips, I have been trying to lose weight for a while now, but didn't know which direction to go. You have really helped, thanks again!"
"Wow, awesome tips, thanks a lot. I have been having problems for a while now and your post has put me on the right path. I will be sending this to a few of my friends who I think will like it too!"
Like I said, there are probably hundreds if not thousands of ideas and strings that you could come up with, and that's for you to figure out in your own niche or with your own ideas. As long as the page is targeted, and your comment is targeted, then you're good to go.
Google News
This is a way to find topics of discussion that you can then go and scrape blogs for, and comment on with a targeted and relevant comment, using... Google News!
You go on to google news and find a recent story that looks like it belongs to a certain niche; let's use weight loss for example again. Let's say someone who is a famous dietician died.
Now you go to scrapebox and harvest blogs using the footprint intitle:"famous dietician". Then you can leave a comment saying "I heard about his/her death, I'm so upset about the loss. My condolences."
That was just an example, but you can do this for any news post that you happen to come across, and it is a really great way of creating relevant comments that will get accepted. Not only is the comment targeted, it is relevant news about the post, and blog owners love this as it brings real news content to their pages... and you get a free link from it.
Spinning comments
Always spin your comments for maximum uniqueness. I use The Best Spinner, and it is the best spinning software by far. I have included a link in the appendix, but I will give you a link here as well. You get a trial for $7 so you can test it out. This is an affiliate link....
Here is the link: http://www.ultimatescrapeboxadvantage.com/thebestspinner.html
There are video tutorials and plenty of YouTube vids for you to feast on, and I would say that you
can learn the tool in about a day, and be a pro in about a week.
If you are on a tight budget you can get SpinnerChief.com for free
http://www.spinnerchief.com/soft.aspx?id=S16298 (aff)
If you use both methods above to scrape targeted blogs, and then spin relevant targeted comments, you will see your approval rate shooting up, and you will get loads of awesome backlinks.
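For the curious, here is a minimal Python sketch of what any spinner does under the hood: it resolves {option1|option2} spintax groups with random choices, innermost groups first. The example comment is just a placeholder.

import random
import re

def spin(text):
    # Repeatedly replace the innermost {a|b|c} group with a random choice.
    pattern = re.compile(r"\{([^{}]*)\}")
    while True:
        match = pattern.search(text)
        if match is None:
            return text
        choice = random.choice(match.group(1).split("|"))
        text = text[:match.start()] + choice + text[match.end():]

print(spin("{Great|Awesome|Nice} post, thanks {a lot|so much}!"))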
You can include a keyword in that footprint as well if you would like, but for the moment, let's just scrape some habari blogs.
Once you have a list, load it into the scrapebox manual poster, and also make sure you have the emails file, comments file, names file, and websites file ready.
It doesn't really matter what is in the names, emails, websites, and comments, because we are not actually posting; we are just training scrapebox to learn the habari platform.
Hit Start posting, and the manual posting window will pop up.
Then load up the first site on the list and click Learn; another window will pop up with the first site showing.
Then click on the Name field, and a box will appear which says define field type; you want to click Name field. Do the same for the email field, the website field, the comment field and the submit button. Then name the blog platform habari.
Then hit done, and move to the next website on the list. You have started to teach scrapebox the
habari platform.
It will take a few blogs until scrapebox has fully learned the platform. Load up the second site on your list and you might see that some or all of the form areas have been filled in. If they are all filled in, then scrapebox has learned the platform, and will not let you click the learn button. If only some of the form areas are filled in, you will be able to click the learn button.
Click the learn button again, and follow the exact same steps with the next site. After doing this with a few sites, scrapebox will have learned the platform, and it will fill out all the fields for you. If you click the learn button again, scrapebox will show you a screen telling you that you have already taught it the platform.
This technique is very simple, but it can be used to teach scrapebox different blog platforms, so if you ever come across a platform that isn't recognised, just teach it to scrapebox and then scrape as many blogs as you can.
Put that in the custom footprint area, and then add some keywords. To get more results, use more keywords. Your basic forum footprint should be,
"Powered by vBulletin" "In order to proceed, you must agree with the following rules"
Then scrape, and check the domain pagerank (not the URL pagerank), then filter out all the low PR domains, and you now have a big list of pages where you can just sign up and place your link on a publicly viewable page; you can also add your signature with a few posts on some pages.
You can also scrape other types of forums; here are some footprints for you to use. I also mention many more forum footprints in the footprint section in the appendix.
"powered by phpbb"
"powered by punbb"
Powered by PHPbb
Powered by SMF
Powered by expression engine
powered by Simple Machines
Also try some of these strings to get more results. These help you look for register pages; you can try them with all the forum types above, or look in the footprints area for more ideas.
There are many more footprints in the appendix, and you can find some yourself by going to forums and seeing if there is a footprint in the code, the url, or the title of a site.
I provide a Merge file that will automatically merge the footprints with any usernames that you put
into the keyword area of scrapebox.
Then just go to the forums that you have just scraped, sign up, and post your link.
You can answer a question by saying "go to this site", and you can solve an issue someone has by offering your link as a valuable resource.
I'm sure there are more ways, but it's important to know that forums usually have very responsive communities, and they are communities of people who need solutions to their problems. What better way than that is there for you to make some money?
There is one exception to this rule of forums for traffic, and that is the internet marketing niche.... and it's because we all know what you are doing.... there are plenty of other niches of clueless individuals out there though, just waiting to be sold to.
Indexing Techniques
Indexing is an important part of backlinking, and the techniques I discuss here are about forcing indexing, using either rss, pinging or the rapid indexer. The best method is the RSS method, as it looks the most natural. The pinging and rapid indexer methods are more dangerous and considered spammier, so I will mention and explain them, but not go into too much detail.
RSS
Using RSS is a great way to get your backlinks indexed fast. There are several ways of doing this, and I'm sure you will see other techniques elsewhere, but I will show you what I do, and I think it's the best way to do it.
First import the list of links that you have created, either on blogs, forums or elsewhere, into the harvester. You can also load up your website and inner pages if you want them indexed.
Then export as an RSS XML list; make sure you split up the entries so there are 30-40 entries per xml list.
Then a window should pop up and you can start scanning; this will scrape the title and description for you. When this is done you can export and save it to a folder that you will remember. Remember to split up the entries so there are around 30-40 entries per xml.
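If you ever want to build these split feeds outside scrapebox, here is a rough Python sketch. It assumes you already have the url, title and description for each link; the channel title, link and file names are placeholders.

from xml.sax.saxutils import escape

def write_feeds(items, chunk_size=35):
    # items is a list of (url, title, description) tuples; write one
    # minimal RSS 2.0 file per chunk of 35 entries (inside the 30-40 range).
    for n in range(0, len(items), chunk_size):
        with open("feed%d.xml" % (n // chunk_size + 1), "w") as f:
            f.write('<?xml version="1.0"?>\n<rss version="2.0"><channel>\n')
            f.write("<title>Links</title><link>http://example.com</link>"
                    "<description>Link feed</description>\n")
            for url, title, desc in items[n:n + chunk_size]:
                f.write("<item><title>%s</title><link>%s</link>"
                        "<description>%s</description></item>\n"
                        % (escape(title), escape(url), escape(desc)))
            f.write("</channel></rss>\n")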
Then upload it to your server via FTP or your cPanel. If you don't know how to do this, then look it up on google. I recommend software called FileZilla; it's free, easy to use, and I use it. Go here to download FileZilla: http://filezilla-project.org/
Test the feed by looking up the url of the uploaded feed in your internet browser; it should show a list of the sites that you have posted your links on.
Copy the URL of the xml feed and go back to scrapebox. Click the rss button; in the websites text file you should put your xml feeds, and in the rss services file you should load up the rss services that you will find in the rss folder that came with your installation of scrapebox.
Now just start the RSS submission, and make sure you delete any failed services, because they will be a waste of your time in the future.
Pinging
You can use the scrapebox ping mode to get backlinks to your sites, web2.0 pages, redirects, or other backlinks to help with indexing. You can generate an almost unlimited number of backlinks with this technique, so it's worth mentioning in my e-book; however, it's very spammy, and some say it does more bad than good.
Many websites use some type of logging or statistics platform, such as AWStats, to track the visitors that are going to their websites. The visitor logs that are generated are generally stored on the site's server. Sometimes google indexes these, which results in the referrer sites getting backlinks.
So all you have to do is find indexed stats pages that can be used with the scrapebox ping mode, then load up those sites, and the sites that you want pinged, and start pinging.
The footprints to find indexed referrer sites for this technique are noted in the appendix.
Failing doing this yourself, here is a great pinging resource you can use to ping a list of sites: http://www.pingfarm.com/.
Then check the PR of the domain (not the page) and you have just compiled a list of directories
where you can submit your link on high pr sites, or offer a directory submission service.
Scrapebox can also be used as an email harvester, and even though this method is not completely ethical, I am including it for the purpose of a complete guide on the uses of scrapebox; if you want to use this method, then on your head be it!
inurl:craigslist.org keyword
inurl:location.craigslist.org keyword
Use the footprints mentioned in the appendix to scrape a list of URLs, then when you are done click grab emails, and then select from harvested URL list. There you go, that wasn't hard, was it?
site:www.forum-you-scraped.com inurl:memberlist.php
You can use the merge feature with this technique as well to make it easier for you.
Copy this exact text into a text file and save it as memberlist.txt; don't change any of this text.
site:%KW% inurl:memberlist.php
Then load up your list of forum top level domains into the keyword area, and merge the file with the sites. You should now have a list of footprints that will find all the member pages on all the forums that you have found.
Start harvesting, and your results will be all the pages of those forums that have a member list on them.
Then you can use the link extractor addon to extract all the links on those member list pages. This will get you all the individual profile pages.
Then, on those extracted links (the individual profile pages), use the email extraction option to scrape any exposed emails.
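If you want to do the same extraction outside scrapebox, here is a bare-bones Python sketch; the profile URL is a placeholder and the regex is only a simple approximation of an email address.

import re
import urllib.request

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

emails = set()
for url in ["http://forum.example.com/profile.php?id=1"]:
    # Fetch each profile page and collect any exposed addresses.
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    emails.update(EMAIL_RE.findall(html))

print("\n".join(sorted(emails)))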
intitle:free+PLR yourniche
inurl:free+PLR yourniche
intitle:free+PLR+download yourniche
inurl:free+PLR+download yourniche
Now, I'm sure that with the incredible newfound knowledge of footprints you have acquired from this book, you will understand exactly what these footprints mean and how you can manipulate them, but this is a good start to finding free plr articles.
Scraping Images
This is an incredible technique and one of my favourites. I use it all the time for everything you could possibly imagine. You can scrape images for your sites, web2.0 properties, desktop backgrounds, anything you want really, and here is how you do it.
Copy this list of footprints into a text file, and save it as imagesfootprint.txt somewhere that you will remember.
inurl:istock -site:istock.com
inurl:shutterstock -site:shutterstock.com
inurl:bigstockphoto -site:bigstockphoto.com
inurl:jupiterimages -site:jupiterimages.com
inurl:dreamstime -site:dreamstime.com
inurl:fotolia -site:fotolia.com
inurl:canstockphoto -site:canstockphoto.com
inurl:inmagine -site:inmagine.com
Then add the keywords that you want into the keyword section of scrapebox. So if you're looking for pictures of ferraris, put in the keyword section: Ferrari, best Ferrari, Ferrari cars, Ferrari sports car, etc.
Then hit the Merge button, and load in your list of footprints. Now you should have your footprints, each followed by one of the keywords that you wrote in.
Then load up the google image grabber addon. Load up your keywords from the scrapebox keyword list, choose how many photos you want for each keyword, and hit locate image urls. Then hit select target, and choose where you want your images to be saved; you can also create a new folder if you wish. When that is done, hit download, and scrapebox will save all the images you just found into the folder.
You can then browse through them at will. Most of these will be high quality images, and you should be VERY happy with the results; this is the absolute number one way of finding images online, and I swear by it.
You will get the odd watermarked image here and there, but you don't have to use those; just use the ones you like and delete any others, or save your favourites into another folder and delete the original folder.
Final Thoughts
Avoiding the Box of Sand
The sandbox is a term coined by internet marketers, and it means your site has disappeared from the serps, usually as a result of blasting a new site with links. I am going to talk about the sandbox shortly, before going into the various scraping techniques, because I think it's very important, not only to avoid the sandbox, but to minimize your footprint and learn some techniques for good backlink structures. This is the number one fear when it comes to blasting links using scrapebox, or any extreme amount of link building, and it's a perfectly reasonable fear. In my experience this happens when too many links are built to a domain too fast. It seems that Google penalizes you for excessive link building, or what appears to be excessive link building, and they push you to the end of the queue, so to speak. You can be in the sandbox for days or weeks; however, I have found that you always come out stronger, so if you ever fall prey to Google's box of sand, don't worry, just slow down your link building but keep it consistent, and you will come back stronger.
I will discuss the methods of avoiding the sandbox here, just so you can get the fear out of the way, and so you know that there are other options available; then we will go into the various scrapebox techniques.
Aged Domains
If you use aged domains, you can rank faster for keyword terms that might be more competitive. Also, the fact that it is an aged domain means you can blast links to it and never worry about the sandbox, resulting in faster rankings.
Now, don't get me wrong: exact match domains, for example www.yourkeyword.com (no hyphens), will rank extremely fast for keywords that have less than 50,000 competing pages, but with any more competition than that you should look into getting an aged domain.
If your competition in the top ten results all have aged domains, then it's much more important to get an aged domain.
Where to get Aged domains
My number one place to get aged domains is Register Compass, and here is a link to the site: http://www.registercompass.com. There are other free options for finding domains, but I'm sure your research skills will help you in that respect. Try domainface to start, or search in google for "find aged domain names". Here is another free site to use: http://www.aged-domain-finder.com/search.php. Also, look out for Terry Kyle's forthcoming http://domainbuyingblackbelt.com/, which can help you look for and buy domains.
If you're using Register Compass, you want to filter your search to find domains that are older than 4 years, and are expiring/on auction at godaddy. Try and get as many domains as you can into your auction watching list on godaddy, and then when you see a domain that has only a few minutes or hours left but no one has bid on it, you can grab it for $10. You might want to check the domain on the Wayback Machine to validate its age. Usually the more aged the domain is, the more you will pay, but I have used this method many times to get aged domains for cheap.
When you have your aged domain set up (it takes a week after you win the bid), you can build the site with the on page SEO that you need, and blast it with links.
No sandbox for you!
New Domains
When working with new domains, you don't want to blast the domain with too many links because you might end up in the sandbox; however, with the following methods it won't be a problem for you.
Firstly, you are going to want to start building links consistently and slowly, pointing to your new money site. You can use scrapebox to find .edu/.gov links, high pr blogs and forums, and make a few links every day, but only a few. No blasts of thousands of links from scrapebox... no, you don't want to visit the box of sand, do you? I discuss later in this guide how you can find high pr forums, blogs, and .edu/.gov links.
So, with a new domain, make sure you build good links, but few of them, a maximum of 50 per week straight to the money site. However, here are a couple of ways that you CAN use scrapebox to blast links to a new domain without getting sandboxed.
The Redirect Method
This is an easy and brilliant method to avoid the sandbox. Buy a cheap domain; my favourites are .info domains (you can buy one for a couple of bucks), and redirect this website with a 301 redirect straight to your money site. This acts as a proxy that stops your site being sent to the sandbox.
Once you have set up your redirect to your money site, you can blast the .info domain with links, and all the link juice gets redirected to your money site, but your site is protected from the sandbox.
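Before blasting, it is worth confirming the 301 is actually in place. Here is a quick Python sketch that checks the response code without following the redirect; the .info domain is a placeholder.

import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # stop urllib from following the redirect

opener = urllib.request.build_opener(NoRedirect)
try:
    resp = opener.open("http://www.your-cheap-domain.info/")
    print("No redirect found, status", resp.status)
except urllib.error.HTTPError as e:
    # A healthy setup shows 301 plus your money site in Location.
    print(e.code, "->", e.headers.get("Location"))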
To expand on this method to the max, you can do what I do: buy twenty .info domains, redirect them all to your money site, and then start blasting each one, or all of them together. This link building power on a new site is amazing, and you will be astonished with the results.
The Web2.0 Method
This method is easy, and fun because of all the possibilities. You build a web2.0 property (I have a list of them in the appendix section) and fill it up with some filler content you can find (see the finding free PLR articles section in this e-book). Then place your link on the web2.0 property, and point it at your money site using your anchor text.
Then blast the web2.0 property with scrapebox. This method is not as powerful as the redirect method; however, it works great to stop you going to the sandbox, and it creates very powerful web2.0 links to your money site.
Link Wheels
You can also use the web2.0 technique with link wheels, which is an incredibly powerful technique when used in this way.
[Diagram: web2.0 properties linking to each other in a wheel, with each one also linking to the money site.]
Link Columns
This is another way of setting up your linking structure. These are also called closed link wheels, because they are exactly the same as a link wheel, but with one web2.0-to-web2.0 link missing.
Appendix Section
Important links/Resources/Tools
Guides
User guide
http://www.scrapebox.com/usage-guide/
Official YouTube channel
http://www.youtube.com/user/scrapebox/
Other YouTube channels
http://www.youtube.com/user/RinTinTindy
http://www.youtube.com/user/ScrapeBoxBlueprint
Tools
Notepad++ http://notepad-plus-plus.org/
Forum profile creator - http://www.mediafire.com/?3p9c00otb9i25c6
Forum profile creator thread - Thanks to Crazyflx
Aged domains
http://www.aged-domain-finder.com/search.php
http://www.registercompass.com/
http://www.archive.org/web/web.php
Proxies
www.yourprivateproxy.com/413.html
Dofollow blogs lists/resources
http://www.whydowork.com/blog/blogging-tips/558/
http://www.myseoblog.net/2008/06/21/lists-and-directories-of-dofollow-blogs/
http://nicusor.com/do-follow-list/
http://www.jimkarter.com/list-of-dofollow-blogs
http://w3ec.com/dofollow/
RSS
http://links2rss.com/
http://www.feedlisting.com/submit.php
Pinging
http://www.pingfarm.com/
Forums
Finding spammers' usernames: http://stopforumspam.com/
Web 2.0 Sites
http://blogskinny.com
http://blogstream.com
http://blogwebsites.net
http://blurty.com
http://clearblogs.com
http://www.easyjournal.com
http://free-conversant.com
http://freeflux.net
http://opendiary.com
http://sosblog.com
http://tabulas.com
http://terapad.com
http://thoughts.com
http://upsaid.com
http://viviti.com
PR4
http://blogeasy.com
http://bloghi.com
http://bloghorn.com
http://blogigo.com
http://blogono.com
http://blogr.com
http://blogstudio.com
http://blogtext.org
http://bloxster.net
http://freeblogit.com
http://insanejournal.com
http://journalfen.net
http://journalhub.com
http://mynewblog.com
http://netcipia.com
http://shoutpost.com
http://thediary.org
http://wikyblog.com
Footprints Continued
Here is a massive list of footprints that you can use for a multitude of different situations. Please note that I will never be able to list all the footprints, because more and more are discovered every day; I know that I am always finding more, and so will you.
Use these footprints in the custom footprint section with your keywords below, or use them in the keyword section with your keywords already in the footprints if you want to use more than one at a time. Or you can use the merge feature with a list of keywords, which you should know about from reading the footprints section at the beginning of this e-book.
Don't forget that, whatever footprint you are using, you can always add the operators that you learned at the beginning of this e-book. For example, you can add a keyword phrase to any footprint, to only find those sites with that keyword phrase.
Also, please note that these are not all my creations; I have built this list of footprints from so many sites that I can't even remember them, but here they all are, packaged together just for you.
Blogs
powered by wordpress
Powered by BlogEngine
Powered by Blogsmith
powered by Typepad
powered by scoop
powered by b2evolution
powered by ExpressionEngine
Things to add
"leave a comment"
"leave a reply"
"reply"
comment
Forums
Powered by PHPbb
Powered by vBulletin
Powered by SMF
powered by Simple Machines
inurl:/index.php?action=register
powered by punBB
powered by expressionengine
inurl:/member/register/ "powered by expressionengine"
Powered by SMF inurl:"register.php"
Powered by vBulletin inurl:forum
Powered by vBulletin inurl:forums
Powered by vBulletin inurl:/forum
Powered by vBulletin inurl:/forums
Powered by vBulletin inurl:"register.php"
act=post&forum=19
forums/show/
module=posts&action=insert&forum_id
posts/list
/user/profile/
/posts/reply/
new_topic.jbb?
"powered by javabb 0.99"
login.jbb
new_member.jbb
reply.jbb
/cgi-bin/forum/
cgi-bin/forum.cgi
/registermember
listforums?
"forum mesdiscussions.net
version"
index.php?action=vtopic
"powered by forum software minibb"
index.php?action=registernew
member.php?action=register
forumdisplay.php
newthread.php?
newreply.php?
/phorum/
phorum/list.php
"this forum is powered by phorum"
phorum/posting.php
phorum/register.php
phpbb/viewforum.php?
/phpbb/
phpbb/profile.php?mode=register
phpbb/posting.php?mode=newtopic
phpbb/posting.php?mode=reply
/phpbb3/
phpbb3/ucp.php?mode=register
phpbb3/posting.php?mode=post
phpbb3/posting.php?mode=reply
/punbb/
punbb/register.php
"powered by phpbb"
"powered by punbb"
/quicksilver/
"powered by quicksilver forums"
index.php?a=forum
index.php?a=register
index.php?a=post&s=topic
/seoboard/
"powered by seo-board"
seoboard/index.php?a=vforum
index.php?a=vtopic
/index.php?a=register
"powered by smf 1.1.5"
"index.php?action=register"
/index.php?board
"powered by ubb.threads"
ubb=postlist
ubb=newpost&board=1
"ultrabb"
view_forum.php?id
new_topic.php?
login.php?register=1
"powered by vbulletin"
vbulletin/register.php
/forumdisplay.php?f=
newreply.php?do=newreply
newthread.php?do=newthread
"powered by bbpress"
bbpress/topic.php?id
bbpress/register.php
add topic
new topic
phpbb
yabb
ipb
posting
add message
send message
post new topic
new thread
send thread
vbulletin
bbs
intext:"powered by vbulletin"
intext:"powered by yabb"
intext:"powered by ip.board"
intext:"powered by phpbb"
inanchor:vbulletin
inanchor:yabb
inanchor:ip.board
inanchor:phpbb
/board
/board/
/foren/
/forum/
/forum/?fnr=
/forums/
/sutra
act=reg
act=sf
act=st
bbs/ezboard.cgi
bbs1/ezboard.cgi
board
board-4you.de
board/ezboard.cgi
boardbook.de
bulletin
cgi-bin/ezboard.cgi
invision
kostenlose-foren.org
kostenloses-forum.com
list.php
lofiversion
modules.php
newbb
newbbs/ezboard.cgi
onlyfree.de/cgi-bin/forum/
phpbbx.de
plusboard.de
post.php
profile.php
showthread.php
siteboard.de
thread
topic
ubb
ultimatebb
unboard.de
webmart.de/f.cfm?id=
xtremeservers.at/board/
yooco.de
forum
phorum
cgi-bin/forum/
/cgi-bin/forum/blah.pl
"powered by e-blah forum software"
"powered by xmb"
/forumdisplay.php?
/misc.php?action=
member.php?action=
"powered by: fudforum"
index.php?t=usrinfo
/index.php?t=thread
/index.php?t=
index.php?t=post&frm_id=
"powered by fluxbb"
/profile.php?id=
viewforum.php?id
login.php
register.php
profile.forum?
posting.forum&mode=newtopic
post.forum?mode=reply
"powered by icebb"
index.php?s=
act=login&func=register
Directories
Powered by: php Link Directory
powered by PHPLD
Powered by WSN Links
powered by PHP Weby
Powered by cpLinks
Powered by cpDynaLinks
powered by BosDirectory
Powered by Link manager LinkMan
Powered by Gossamer Links
Powered by K-Links
Powered by In-Link
Powered by eSyndiCat Directory Software
Powered by: qlWebDS Pro
Powered by Directory software by LBS
powered by phpMyDirectory.com
Powered by HubDir PHP directory script
Ping Mode
"Generated by Webalizer Version 2.01"
"Generated by Webalizer Version 2.02"
"Generated by Webalizer Version 2.03"
"Generated by Webalizer Version"
"Created by awstats"
"Advanced Web Statistics 5.5"
"/webalizer/usage"
"/usage/usage"
"/statistik/usage"
"/stats/usage"
"/stats/daily/"
"/stats/monthly/"
"/stats/top"
"/wusage/"
"/logs/awstats.pl"
"/webstats/awstats.pl"
"/awstats.pl"
inurl:/usage_
inurl:/awstats.pl?lang=
inurl:/awstats.pl?config=
inurl:/awstats.pl?output=
usage statistics "Summary Period: february 2009" (put last month here so you know that Google has
indexed the page recently; see the sketch after this list)
usage statistics "Summary Period: march 2009"
Generated by Webalizer
inurl:awstats.pl intitle:statistics
Created by awstats
inurl:usage_200811.html
produced by wusage
inurl:twatch/latest.html
inurl:stats/REFERRER.html
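If you don't want to work out last month by hand each time, a short script can print the date-based footprints for you. This is a minimal sketch, assuming you want the Webalizer "Summary Period" wording and the usage_YYYYMM.html page name used above:

import datetime

# Step back from the first of this month to land anywhere in last month.
today = datetime.date.today()
last_month = today.replace(day=1) - datetime.timedelta(days=1)

# Webalizer summary pages, e.g.  usage statistics "Summary Period: february 2009"
print(f'usage statistics "Summary Period: {last_month.strftime("%B %Y").lower()}"')

# Date-stamped usage pages, e.g.  inurl:usage_200811.html
print(f'inurl:usage_{last_month.strftime("%Y%m")}.html')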
.edu/.gov Blogs
inurl:.gov+inurl:blog
site:.edu inurl:wp-login.php +blog
site:.gov inurl:wp-login.php +blog
site:.edu inurl:wp-admin +login
site:.edu inurl:blog post a comment
site:.edu inurl:blog post a comment -comments closed -you must be logged in keyword
site:.edu no comments +blogroll -posting closed -you must be logged in -comments are closed
site:.gov no comments +blogroll -posting closed -you must be logged in -comments are closed
inurl:(edu|gov) no comments +blogroll -posting closed -you must be logged in -comments are closed
site:.edu inurl:blog comment -you must be logged in -posting closed -comment closed keyword
keyword blog site:.edu
keyword +inurl:blog site:.edu
.edu/.gov Forums
edu inurl:login (Create an account)
site:edu powered by vbulletin
inurl:.edu/phpbb2
inurl:.edu/ (Powered by Invision Power Board)
site:edu powered by SMF
Email Harvesting
inurl:craigslist.org keyword
inurl:location.craigslist.org keyword
Comment Footprints
I am impressed by the quality of information on this website. There are a lot of good
resources here. I am sure I will visit this place again soon.
Very useful info. Hope to see more posts soon!
Great blog post. It's useful information.
Hi, I've been a lurker around your blog for a few months. I love this article and your entire
site! Looking forward to reading more!
Useful info. Hope to see more good posts in the future.
Nice job, it's a great post. The info is good to know!
Top post. I look forward to reading more. Cheers
I very much enjoyed this blog. It's an informative topic, and it helped me solve some
problems. Its opportunities are so fantastic and its working style so speedy. I think it may help
all of you. Thanks.
This is very interesting, thanks for that. We need more sites like this. I commend you on your
great content and excellent topic choices.
You really know your stuff... Keep up the good work!
This is a really good site post, I'm delighted I came across it. I'll be back down the track to
check out other posts.
Really cool post, highly informative and professionally written. Good job.
Then more friends can talk about this problem.
You did great work writing and revealing the hidden beneficial features of
I had to refresh the page a few times to view this page for some reason; however, the
information here was worth the wait.
This is a really good read for me. I must agree that you are one of the coolest bloggers I have
ever seen. Thanks for posting this useful information. This was just what I was looking for. I'll
come back to this blog for sure!
I admire what you have done here. I love the part where you say you are doing this to give
back but I would assume by all the comments that is working for you as well. Do you have
any more info on this?
Thanks for the informative post. I am sure this post has helped me save many hours of
browsing other similar posts just to find what I was looking for. I just want to say: thank
you!
Dude.. I am not much into reading, but somehow I got to read lots of articles on your blog.
It's amazing how interesting it is for me to visit you very often.
This is my first time visiting here. I found so much entertaining stuff in your blog, especially its
discussion. From the tons of comments on your posts, I guess I am not the only one having
all the enjoyment here! Keep up the excellent work.
Excellent read, I just passed this onto a colleague who was doing a little research on that.
And he actually bought me lunch because I found it for him (smile). So let me rephrase that: thanks for lunch!
"Its always good to learn tips like you share for blog posting. As I just started posting
comments for blog and facing problem of lots of rejections. I think your suggestion would be
helpful for me. I will let you know if its work for me too."
'Thank you for this blog. That's all I can say. You most definitely have made this blog into
something thats eye opening and important. You clearly know so much about the subject,
youve covered so many bases. Great stuff from this part of the internet. Again, thank you for
this blog."
Apple iPod World provides free information, reviews on all products related to the Apple
iPod, these include the iPod Classic, Touch, Nano and Shuffle.
Multicast Wireless is a mission-based, cutting edge, progressive multimedia organization
located in Huntsville, Alabama.
Nice website. You should think more about RSS feeds as a traffic source. They bring me a
nice bit of traffic.
Useful information shared. I am very happy to read this article. Thanks for giving us nice
info. Fantastic walk-through. I appreciate this post.
I agree with your thoughts. Thank you for sharing.
This is something I have never read before. Very detailed analysis.
That's great, I never thought about Nostradamus in the OR
Subsequently, after spending many hours on the internet, at last we've uncovered an
individual who definitely does know what they are discussing. Many thanks for a great,
wonderful post.
Nice post. It's really a very good article. I noticed all your important points. Thanks.
I think so. I think your article will give those people a good reminder, and they will express
thanks to you later.
Thanks for the nice blog. It was very useful for me. Keep sharing such ideas in the future as
well. This was actually what I was looking for, and I am glad I came here! Thanks for sharing
such information with us.
I wanted to thank you for this great read!! I am definitely enjoying every little bit of it. I have you
bookmarked to check out the new stuff you post.
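General edu (try these with .gov tld as well)
The footprints below are the forum footprints from earlier in this chapter, quoted and scoped to .edu domains with the site: operator. You do not have to retype them for other TLDs; a couple of lines of Python can rebuild the whole list from any footprint file. A minimal sketch (forum_footprints.txt is just an assumed file name):

# Wrap every footprint in quotes and scope it to .edu domains.
with open("forum_footprints.txt") as f:
    for line in f:
        fp = line.strip()
        if fp:
            print(f'site:.edu "{fp}"')

Swap .edu for .gov in the print line to build the matching .gov list.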
site:.edu "ipb"
site:.edu "posting"
site:.edu "add message"
site:.edu "send message"
site:.edu "post new topic"
site:.edu "new thread"
site:.edu "send thread"
site:.edu "vbulletin"
site:.edu "bbs"
site:.edu "cgi-bin/forum/"
site:.edu "/cgi-bin/forum/blah.pl"
site:.edu "powered by e-blah forum software"
site:.edu "powered by xmb"
site:.edu "/forumdisplay.php?"
site:.edu "/misc.php?action="
site:.edu "member.php?action="
site:.edu "powered by: fudforum"
site:.edu "index.php?t=usrinfo"
site:.edu "/index.php?t=thread"
site:.edu "/index.php?t="
site:.edu "index.php?t=post&frm_id="
site:.edu "powered by fluxbb"
site:.edu "/profile.php?id="
site:.edu "viewforum.php?id"
site:.edu "login.php"
site:.edu "register.php"
site:.edu "profile.forum?"
site:.edu "posting.forum&mode=newtopic"
site:.edu "post.forum?mode=reply"
site:.edu "powered by icebb"
site:.edu "index.php?s="
site:.edu "act=login&func=register"
site:.edu "act=post&forum=19"
site:.edu "forums/show/"
site:.edu "module=posts&action=insert&forum_id"
site:.edu "posts/list"
site:.edu "/user/profile/"
site:.edu "/posts/reply/"
site:.edu "new_topic.jbb?"
site:.edu "powered by javabb 0.99"
site:.edu "login.jbb"
site:.edu "new_member.jbb"
site:.edu "reply.jbb"
site:.edu "/cgi-bin/forum/"
site:.edu "cgi-bin/forum.cgi"
site:.edu "/registermember"
site:.edu "listforums?"
site:.edu "forum mesdiscussions.net"
site:.edu "version"
site:.edu "index.php?action=vtopic"
site:.edu "powered by forum software minibb"
site:.edu "index.php?action=registernew"
site:.edu "member.php?action=register"
site:.edu "forumdisplay.php"
site:.edu "newthread.php?"
site:.edu "newreply.php?"
site:.edu "/phorum/"
site:.edu "phorum/list.php"
site:.edu "this forum is powered by phorum"
site:.edu "phorum/posting.php"
site:.edu "phorum/register.php"
site:.edu "phpbb/viewforum.php?"
site:.edu "/phpbb/"
site:.edu "phpbb/profile.php?mode=register"
site:.edu "phpbb/posting.php?mode=newtopic"
site:.edu "phpbb/posting.php?mode=reply"
site:.edu "/phpbb3/"
site:.edu "phpbb3/ucp.php?mode=register"
site:.edu "phpbb3/posting.php?mode=post"
site:.edu "phpbb3/posting.php?mode=reply"
site:.edu "/punbb/"
site:.edu "punbb/register.php"
site:.edu "powered by phpbb"
site:.edu "powered by punbb"
site:.edu "/quicksilver/"
site:.edu "powered by quicksilver forums"
site:.edu "index.php?a=forum"
site:.edu "index.php?a=register"
site:.edu "index.php?a=post&s=topic"
site:.edu "/seoboard/"
site:.edu "powered by seo-board"
site:.edu "seoboard/index.php?a=vforum"
site:.edu "index.php?a=vtopic"
site:.edu "/index.php?a=register"
site:.edu "powered by smf 1.1.5"
site:.edu "index.php?action=register"
site:.edu "/index.php?board"
site:.edu "powered by ubb.threads"
site:.edu "ubb=postlist"
site:.edu "ubb=newpost&board=1"
site:.edu "ultrabb"
site:.edu "view_forum.php?id"
site:.edu "new_topic.php?"
site:.edu "login.php?register=1"
site:.edu "powered by vbulletin"
site:.edu "vbulletin/register.php"
site:.edu "/forumdisplay.php?f="
site:.edu "newreply.php?do=newreply"
site:.edu "newthread.php?do=newthread"
site:.edu "powered by bbpress"
site:.edu "bbpress/topic.php?id"
site:.edu "bbpress/register.php"
site:.edu "powered by the unclassified newsboard"
site:.edu "forum.php?req"
site:.edu "forum.php?req=register"
site:.edu "/unb/"