WWW - Simplybasics.in Site Audit Report
nirmalaya.com 3
phool.co 3
www.forestessentialsindia.com 3
www.soulflower.in 2
ancientliving.in 1
aromamagic.com 1
arotatvika.in 1
auradepurity.com 1
cycle.in 1
deyga.in 1
Critical technical errors appear in the website's code and HTML markup. They confuse search engine robots by
blocking them from scanning all the necessary information on the site. These errors make the site vulnerable to
competitors and reduce rankings.
Other technical errors appear in the site software and HTML markup. These errors do not affect the site's ranking
directly. However, they may cause visual problems on the site: incorrect display of content, or layout and usability
issues on mobile devices. These errors are likely to degrade the user experience, and that will likely be reflected in
the site's behavioral metrics.
Warnings: 283
Warnings are issues that may require immediate attention or may be innocuous, depending on your website. These
warnings may require the help of a specialist - you can contact our support team straight away if you believe any of
these warnings are dangerous for your website.
Notices: 0
Notices are things our crawler spotted during the crawl of your site. These are neither errors nor warnings, but
useful bits of information picked up while crawling your site.
SSL (Secure Sockets Layer) is the standard security technology for establishing an encrypted link between a web
server and a browser. Google now uses HTTPS as a ranking signal, so websites served over HTTPS have a slight
ranking advantage.
If the SSL certificate has expired, the browser issues a warning such as "Potential Security Risk Ahead". A visitor to
an "unsafe" URL is greeted with a message stating that the site is not secure and that they should leave it to
remain safe. Expired certificates can have a number of negative consequences for the website owner: most likely,
many potential customers will bounce from the site (leave it) because of the insecure-site messages, and that
business is likely to go to competitors. If too many visitors bounce from your site - which is likely in this particular
example - Google will inevitably demote your website in the SERPs due to the poor bounce rate and poor
satisfaction of query intent.
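As a quick self-check, the sketch below (Python, standard library only; the hostname is just an example) connects to a host on port 443 and reports how many days remain before its certificate expires.

    import ssl
    import socket
    from datetime import datetime, timezone

    def cert_days_remaining(host, port=443):
        """Return the number of days before the host's TLS certificate expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # 'notAfter' looks like: 'Jun  1 12:00:00 2025 GMT'
        expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

    print(cert_days_remaining("www.simplybasics.in"))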
Self-signed certificate: No
You need a trusted CA-signed SSL certificate to get the green lock and secure sign in Google Chrome, and to get the
small Google ranking boost for having HTTPS. A self-signed certificate can be generated directly on anyone's web
server, so it has no value for search engines, and browsers will show an error. Self-signed certificates are not
acceptable for public sites, because every user will see a notice that the certificate is invalid due to it being
self-signed.
An SSL certificate is issued for one specific domain and cannot be used for other domains. If it is used on a different
site, the browser will warn visitors that the site is dangerous, which will increase the site's bounce rate. Search
engines also check the certificate, and if they notice an SSL certificate being used on a domain the certificate does
not list, they are likely to demote the site until the issue is fixed.
If an SSL certificate is not confirmed by a certificate authority, the browser will display a security warning and scare
users away. Search engines also check the certificate, and if a problem arises, the website will quickly lose positions
in the search results.
There is no point in buying SSL if search engines and users still visit the site via HTTP. You need to 301-redirect all of
your traffic (for example, in .htaccess) from the HTTP version of your site (unprotected and unencrypted) to the
HTTPS version of your website (protected and encrypted).
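A minimal sketch for verifying the redirect, assuming Python 3 and nothing beyond the standard library (http.client does not follow redirects, so the raw response stays visible). The .htaccess rule in the comment is the typical Apache form, shown only as an assumption about the server setup.

    # Typical Apache .htaccess rule (an assumption; adjust for your server):
    #   RewriteEngine On
    #   RewriteCond %{HTTPS} off
    #   RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]
    import http.client

    conn = http.client.HTTPConnection("www.simplybasics.in", timeout=10)
    conn.request("GET", "/")
    resp = conn.getresponse()
    # Expect: 301 and a Location header pointing at the https:// version.
    print(resp.status, resp.getheader("Location"))
    conn.close()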
If your server is configured incorrectly, port 443 may appear in the URL. This does not look good to users and may
confuse them about your brand name and how to access your site. For example, https://example.com:443 is
confusing and does not look as clean as https://example.com.
RBL / DNSBL databases are blacklists of IP addresses that have been reported for sending spam. A site can be
blacklisted after receiving multiple spam complaints. If you use shared hosting with a shared IP address, your IP
address could be blacklisted because of other spammers using that same IP. Blacklisted IP addresses are
untrustworthy in the eyes of search engines, and mail servers may mark incoming e-mail messages from your
domain as "spam" or even block them entirely. You do not want your IP to be blacklisted; if it is, arrange a new
dedicated IP address with your hosting provider.
Mixed content is a warning that appears in your browser when your content loads through both HTTPS and HTTP.
Typically, the initial HTML page loads over a secure https:// connection, but other resources such as images,
stylesheets, and scripts are loaded over an insecure http:// connection.
This leaves the content that isn't secured through the HTTPS connection vulnerable to 'sniffers' and
'man-in-the-middle' attacks.
A sniffer attack refers to data being stolen or intercepted as it travels across the network; this is only possible when
the data isn't encrypted (as it is under HTTPS). A man-in-the-middle attack is a similar type of eavesdropping attack,
where the attackers interrupt an existing conversation or data transfer. Again, it is only made possible by
unencrypted data.
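A simple way to spot candidates, sketched in Python with the standard library: fetch the HTTPS page and list src/href attributes that explicitly use the http:// scheme. Plain <a href> links are not true mixed content, so the output still deserves a manual look.

    import re
    import urllib.request

    url = "https://www.simplybasics.in/"
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")

    # Attribute values that explicitly request a resource over plain HTTP.
    insecure = re.findall(r'(?:src|href)=["\'](http://[^"\']+)', html)
    for resource in sorted(set(insecure)):
        print("possible mixed content:", resource)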
A compromised domain is a domain name that has been hacked or hijacked by cybercriminals for malicious
purposes, such as hosting malware, phishing, spam, or ransomware. Linking to compromised domains can have
serious consequences for your SEO and website reputation.
The <iframe> HTML element represents a nested browsing context, embedding another HTML page into the current
one.
The <iframe> element can be a security risk if a hostile site is embedded inside an iframe on your site. If someone
compromises a site that is in a frame, they can conceivably compromise the integrity of your site, and a malicious
site can use an iframe to exploit a vulnerable site via CSRF.
Therefore, pay attention when adding an iframe from an untrusted website.
If you have HTTPS enabled, the user can access up to four versions of your site (https://www, https://,
http://www, http://); without HTTPS there are two (http://www, http://). Leaving your site accessible via multiple
URLs is bad for user experience and may negatively affect indexing. You want just one way to access your site
(https://www. or https://), which you can achieve with the canonical tag and by 301-redirecting all other versions
of your site.
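The sketch below (Python standard library; urlopen follows redirects, and geturl() reports where it ended up) requests all four homepage variants and prints the final URL for each. All four lines should end at the same address.

    import urllib.request

    variants = [
        "http://simplybasics.in/",
        "http://www.simplybasics.in/",
        "https://simplybasics.in/",
        "https://www.simplybasics.in/",
    ]
    for url in variants:
        final = urllib.request.urlopen(url, timeout=10).geturl()
        print(url, "->", final)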
This report shows which countries are allowed to view your website. For example, a US company may have blocked
access for EU visitors to avoid GDPR regulations.
Document requests to your server by our crawler should return the same document size - this is the expected
behavior of a server.
If there is a discrepancy in document size, then our crawler and your users are likely receiving different versions of
your pages.
If that is the case, you shouldn't be surprised when you open a page and see different content to what our crawler
is showing. The most likely cause of this discrepancy is that your page contains dynamic elements that are
constantly in flux.
A difference in document size can also be attributed to malicious code being present on your site (although this is
less common). In such cases, the malicious code is shown only to certain users and to search engines, whereas other
users (like yourself) see the original code and are none the wiser to the 'injection' of malicious code into your pages.
If malicious code is the reason for the change in document size, then the infected pages are likely showing malicious
code to the search engines, which can hurt your rankings substantially. Malicious code normally results in cloaked
links, which are heavily penalized by Google and other search engines.
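One rough way to probe for this, sketched in Python with the standard library: fetch the same page with a browser-like and a crawler-like User-Agent and compare the sizes. The user-agent strings are illustrative, and a small difference is normal on dynamic pages; only a large, persistent gap is suspicious.

    import urllib.request

    URL = "http://www.simplybasics.in/"
    AGENTS = {
        "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "crawler": "Googlebot/2.1 (+http://www.google.com/bot.html)",
    }
    for name, ua in AGENTS.items():
        req = urllib.request.Request(URL, headers={"User-Agent": ua})
        body = urllib.request.urlopen(req, timeout=10).read()
        print(name, len(body), "bytes")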
Users should always see your website after they enter your address. The only exception is if you are 301-redirecting
your old site to your new site. Apart from this, if your website redirects to another site that you have not
specifically set a 301 to, then your site has been hacked and infected with malware or a virus.
New domains (younger than 6-12 months) are considered the hardest to promote because they have not been
around long enough to establish authority and trust in the eyes of search engines.
Registry Expiration
The NS record (nameservers) points to the DNS server that the domain is hosted with.
IP address: 13.126.188.247
Error codes 500, 503, and 504 are all internal server errors. These errors appear due to server misconfiguration or
other problems on your server. 5xx errors prevent indexing of the site, and search engines reduce the rankings of
sites with a large number of 5xx errors.
Our crawler's request went unanswered while crawling. This can happen when a crawler times out on a page (a
crawler only has a set amount of time it can spend on one page), or because of another network-related issue when
requesting a page from your website. For example, your server may have been down or too busy to handle our
requests when our crawler tried to access a certain page.
These are usually caused by incorrect permissions on PHP or CGI scripts. PHP warnings and notices help developers
debug issues with their code; however, they make for a poor browsing experience when visible to all your website
visitors. If users find you on Google, click through to your page, and then leave quickly, this contributes to a poor
bounce rate (the rate at which users leave your site quickly after visiting), which negatively affects your on-site
conversions and Google rankings.
A 404 error is an HTTP status code indicating that the page you were trying to reach could not be found on the
server. Your server needs to correctly return 404 responses when a page cannot be found, or indexing and search
rankings will be seriously harmed.
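A quick manual test, sketched in Python with the standard library (the nonsense path is made up on purpose): request a page that should not exist and confirm the server answers with a real 404 rather than a "soft 404" (a 200 response that merely looks like an error page).

    import urllib.error
    import urllib.request

    url = "http://www.simplybasics.in/this-page-should-not-exist-xyz"
    try:
        resp = urllib.request.urlopen(url, timeout=10)
        print("soft-404 suspected: server returned", resp.status)
    except urllib.error.HTTPError as err:
        print("status:", err.code)  # a healthy server returns 404 here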
Domain bonding is a server configuration that redirects users to the main website address when they enter the
address with or without "www", or with "http://" or "https://". It transfers traffic and PR metrics to the main
website address. Pages without bonding are considered duplicates and therefore receive reduced rankings.
4xx errors: 0
A 404 error occurs when a user requests a page that does not exist on the site. It happens when the page has been
removed from the site or its URL has been changed. Search engines do not reduce the position of a site with a small
number of these errors. However, if the site has a large number of pages with 404 errors, the search engines may
take this as evidence of a poorly maintained site and demote you accordingly.
Note: Our crawler may sometimes receive a 404 error from a page that loads correctly for the user and has no
visual issues. If this occurs, we recommend checking the response codes for these pages manually and reporting
any false positives.
JavaScript is essential for visual formatting and for complex functions on your site. If you see any errors here for
JavaScript files not loading correctly, your users may be getting an unformatted version of your site with broken
features.
CSS is essential for the visual formatting of your site and makes it readable for users. If you see any errors here for
CSS files not loading correctly, your users may be getting a FOUC (flash of unstyled content) or no visual formatting
at all.
Broken images: 3
Broken image URLs cause images to load incorrectly and spoil the user experience.
The anchor text is the visible, clickable text in an HTML hyperlink. If a link has neither anchor text nor an image, this
is probably a technical error.
robots.txt: Yes
The robots.txt file helps control the indexing (reading) of a site by search robots. Think of it as a tour guide for the
search engines, telling them the best places to visit and the places never to visit. It is a list of instructions for search
engines - you can block files, pages, and directories of the site from being indexed by search engine bots. If the
robots.txt file is missing, search engines will read all pages of the site by default: search engine robots (crawlers)
will crawl and index every page of your site unless a "robots" meta tag in a page's HTML code instructs them not to
index that specific page.
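To test the rules yourself, Python's standard library ships a robots.txt parser; the sketch below (the checked URL is just an example) asks whether a given crawler may fetch a given page.

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("http://www.simplybasics.in/robots.txt")
    rp.read()
    # True means the rules allow Googlebot to crawl this URL.
    print(rp.can_fetch("Googlebot", "http://www.simplybasics.in/disclaimer"))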
Robots.txt errors: 13
If your robots.txt file contains any errors, it can cause one of two things:
1. Search engine robots (crawlers) will not be able to read the instructions and rules correctly; they will default to
indexing all content on the site and ignore any rules after the error.
2. Crawlers may be blocked from indexing the entire site (equivalent to setting your entire site to noindex), and it
will not show in the search engines.
A sitemap.xml file is like a road map, but for search engines. It tells the search engines where to find the best
pages, how often to check them, and which other pages they are recommended to browse and index. Although it is
not mandatory for your site to have a sitemap.xml file, having one ensures that search engine crawlers know the
locations of your pages and how frequently they should visit them. Detailed information about using the
sitemap.xml file is available here: https://sitemaps.org.
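A sitemap is plain XML, so it is easy to inspect; the sketch below (Python standard library; the /sitemap.xml path is an assumption) downloads the file and prints every <loc> URL it declares.

    import urllib.request
    import xml.etree.ElementTree as ET

    url = "http://www.simplybasics.in/sitemap.xml"
    root = ET.fromstring(urllib.request.urlopen(url, timeout=10).read())

    # Namespace defined by the sitemaps.org protocol.
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    for loc in root.findall(".//sm:loc", ns):
        print(loc.text)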
Sitemap.xml is a file that helps search engines quickly get the addresses of site pages, the time/frequency of their
last update, and page priority.
Your sitemap.xml file has entries that send search robots to pages with 404 errors or to duplicate pages. The more
sitemap errors you have, the more your rankings will be reduced by search engines.
Sitemap.xml is a file that helps search engines quickly get the addresses of site pages, the time/frequency of their
last update, and page priority.
This report lists the pages that are open for indexing but missing from the sitemap.
http://www.simplybasics.in/product-list/9 200 34 No No
http://www.simplybasics.in/cancellation-and-refund-policy 200 34 No No
http://www.simplybasics.in/disclaimer 200 34 No No
http://www.simplybasics.in/blogs-list/ 200 0 No No
http://www.simplybasics.in/product-list/3 200 34 No No
http://www.simplybasics.in/product-list/12 200 34 No No
http://www.simplybasics.in/what-we-do 200 34 No No
http://www.simplybasics.in/best-deals/ 200 0 No No
http://www.simplybasics.in/product-list/11 200 34 No No
http://www.simplybasics.in/product-list/10 200 34 No No
Sitemap.xml is a file in a special format that helps search engines quickly get the addresses of site pages, the time
of their last update, how frequently they should be checked, and the site's hierarchy. This report lists pages that are
blocked from indexing but are present in the sitemap. This can lead to the indexation of unwanted information:
1. If the sitemap contains extra pages, the search engines may penalize the site and lower its search rankings.
2. The search results may reveal confidential information.
3. Indexing extra pages wastes crawl budget and stops the pages you do want from being indexed or re-crawled by
search engines.
This shows how many sitemap files were found on the site. Sitemap files contain a list of pages and other site
resources to be indexed; this information helps search engines index the site more efficiently. The sitemap
standard allows the use of multiple sitemap files. Please note that sitemap files may not be found if their paths do
not comply with the sitemap protocol standard or contain spelling errors.
This report shows the number of HTML pages and other resources found in all the sitemap files at the time of site
analysis.
This report shows how many pages in HTML format were found in all the sitemap files at the time of the site
analysis. It does not include resources in other formats, such as images. This is a useful report for comparing the
number of pages found in the sitemap files against the actual number of pages on the site and indexed in the
search engines, and it may help detect various problems quickly. For example, some sites fail to remove pages from
the sitemap after they have been deleted or disabled in the site's administrative panel, which can lead to a loss of
crawl budget and potentially a decrease in the site's positions. If there are significantly fewer pages in the search
engine index than in the sitemap, it may also indicate that the site has indexing issues, or that the sitemap is
formatted incorrectly and contains extra pages.
Errors in the sitemap can lead to incorrect interpretation of the data and the inability to use the entire file or
individual lines within it. We check the sitemap for compliance with the sitemap protocol, XML, and W3C standards,
as well as Google, Yahoo, and Bing recommendations.
Warnings indicate problems that will significantly decrease a sitemap's effectiveness. For example, on a site with
tens of thousands of pages, indexing changes to pages can take several hours to several days if the sitemap is done
correctly. If it is done incorrectly - for example, if there are no timestamps in the sitemap - it may take several
weeks to index the changes, which slows down any promotion or optimization of your site.
This report displays critical errors in the HTML code and HTML DOM structure. HTML errors confuse search engines:
their crawling robots may incorrectly determine the content of the site and overlook important parts of a page
because they cannot read it due to HTML coding errors.
http://www.simplybasics.in/ 1
http://www.simplybasics.in/product-list/9 3
http://www.simplybasics.in/product-list/12 3
http://www.simplybasics.in/product-list/4 3
http://www.simplybasics.in/product-list/2 3
http://www.simplybasics.in/product-list/5 3
http://www.simplybasics.in/product-list/1 3
http://www.simplybasics.in/product-list/6 3
http://www.simplybasics.in/product-list/3 3
http://www.simplybasics.in/product-list/10 3
You should NEVER have multiple meta titles and descriptions on one page - each page should have exactly one
meta title and one meta description. Having more than one of either on a page confuses search engines, and you
may be penalized.
Excessively large HTML pages (more than 3 MB of raw HTML) are rarely indexed by search engines. Search engines
have strict limits on the amount of time their crawlers may spend per page, and our crawler will likewise not be
able to crawl excessively large pages.
The DOCTYPE tells web browsers what version of HTML your page is using, and it should always be the very first
line of your HTML code. Though modern browsers disregard minor issues, serious errors such as an incorrect
DOCTYPE do affect the accessibility of web content.
Images add value to your SEO efforts by increasing user engagement and buyers' attention, trust, and conversion
rate. Please note that your site may contain more images than our crawler was able to find (for example, an image
used as a CSS background).
Search engines use alt text and the contents of the page to understand the image's subject matter. Applying alt
attributes to your product images will have an impact on your SEO results.
Title text appears when a user hovers over an image. That being said, the image title is not used for search ranking.
Google extracts information about the subject matter of the image from the content of the page, including captions
and image titles. Wherever possible, make sure images are placed near relevant text and on pages that are relevant
to the image subject matter. (https://developers.google.com/search/docs/advanced/guidelines/google-images)
This report is likely to show pages with no images in the content area. Images add value to your SEO efforts by
increasing user engagement and buyers' attention, trust, and conversion rate. Please note that your site may
contain more images than our crawler was able to find (for example, an image used as a CSS background).
The level of speed optimization shows how fast the mobile version of the site loads. Slow page loads on mobile
devices reduce the user-friendliness of the site and can negatively affect search rankings, since mobile page load
speed is a ranking factor on mobile.
The level of speed optimization shows how fast the desktop version of the site loads. Slow page loads on desktop
reduce the user experience of the site.
Average page load time (only HTML loaded): 1.39 sec.
This is the mean of all your pages' load times. Reducing the load time of pages on the site improves crawler
indexing, conversions, and the overall user experience. The average page load time should not exceed 2 seconds
unless you have an image-heavy page with lazy-loaded images.
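To reproduce the measurement for a single page, the sketch below (Python standard library) times only the raw HTML download, matching the "only HTML loaded" figure above; images, CSS, and scripts are not fetched.

    import time
    import urllib.request

    url = "http://www.simplybasics.in/"
    start = time.perf_counter()
    urllib.request.urlopen(url, timeout=10).read()
    print(f"HTML load time: {time.perf_counter() - start:.2f} sec")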
URL redirection (a redirect) is the automatic forwarding of users from one page of the site to another. For example,
you may visit www.example.com and be sent to https://example.com - this counts as a redirect. This report
contains all the redirects found while crawling your site. Redirects are usually intentional and would only cause
issues if they sent users to a bad third-party website or to a 404 page.
Multiple (chained) redirects occur when a user is redirected more than once after visiting a URL. Google does not
consider multiple redirects an issue unless they send the user to a bad third-party website; however, chained
redirects can provide a poor user experience if too many are strung together.
rel="canonical" is a tag applied to pages that essentially says, "I'm the master copy of this page," to search engine
crawlers when they crawl your site.
A canonicalized page is a page that you recommend for indexation in search engines; it carries the weight of being
'the' authoritative page for that page's specific text on your site. When a search engine crawler comes across the
rel="canonical" tag while crawling your site, it is told to trust and index that version of the page.
The rel="canonical" tag lets search engines quickly identify the "master copy" of a page that has other pages with
duplicate or similar content. This helps the search engine crawler know exactly which page should be indexed and
which pages shouldn't.
Google officially recommends using the rel="canonical" tag to prevent duplicate URLs. You can read about these
guidelines here: Duplicate URL consolidation.
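For a spot check, the sketch below (Python standard library) parses a page's HTML and prints whatever rel="canonical" URL it declares, or None if the tag is missing.

    import urllib.request
    from html.parser import HTMLParser

    class CanonicalFinder(HTMLParser):
        """Remember the href of the first <link rel="canonical"> seen."""
        def __init__(self):
            super().__init__()
            self.canonical = None
        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
                self.canonical = a.get("href")

    html = urllib.request.urlopen("http://www.simplybasics.in/", timeout=10)\
        .read().decode("utf-8", "replace")
    finder = CanonicalFinder()
    finder.feed(html)
    print("canonical:", finder.canonical)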
In this report, you can see when one page redirects to another, along with the response code. While redirects can
be useful when used properly, they can also harm a site when used to redirect to doorway pages.
Doorway pages are an often-manipulative technique used to redirect a user to another page using cloaking,
spamming, etc. Google heavily penalizes the use of these pages, as they are viewed as an attempt to unfairly
manipulate the system and can lead to a bad user experience.
If you identify any occurrences of doorway pages, you have two options:
1. Add value to each of those pages instead of sending the user to a new page.
2. Remove them completely and focus on informative, helpful landing pages.
A redirect automatically makes a browser and search engines go from one URL to another. It's not bad for SEO as
long as there is no doorway page.
Having multiple instances of rel="canonical" on the same page invalidates the rel="canonical" tag entirely for that
page. If multiple instances are found, Google will decide for itself which page should be the authoritative one (this
is not ideal).
You can use the rel="canonical" link element across different domains to specify the exact URL of your preferred
domain for indexing purposes; Google supports cross-domain rel="canonical" tags. However, most of the time
cross-domain rel="canonical" usage is accidental and therefore requires your attention to remedy.
The pages that these rel="canonical" tags point to do not resolve and/or no longer exist. These need to be
addressed.
This wastes crawling budget and can be a mistake. If Google spends too much time crawling URLs that aren't
appropriate for the index, Googlebot might decide that it's not worth the time to look at the rest of your site.
The http/https prefix is missing from the URL. Absolute URLs should specify the full path, including the scheme
(http:// or https://).
The rel="canonical" link tag should only appear in the <head>; a rel="canonical" designation encountered in the
<body> is disregarded.
Critical SEO errors are serious search optimization errors that can significantly damage search rankings.
Other SEO Errors are less likely to negatively affect search rankings. However, a large number of these errors can
negatively impact search rankings.
Warnings: 6
Warnings are issues that may require immediate attention or may be innocuous, depending on your website. These
warnings may require the help of a specialist - you can contact our support team straight away if you believe any of
these warnings are dangerous for your website.
Notices: 0
Notices are things our crawler spotted during the crawl of your site. These are neither errors nor warnings, but
useful bits of information picked up while crawling your site.
If the site is blocked from indexing, the search engines will not be able to crawl (read) the content of the web pages,
and the site will not be displayed in the search results.
A site can be blocked from indexing in several ways:
1. Via the robots.txt file;
2. By entering meta name="robots" on the main page or on all pages of the site;
3. With special software (for example, by blocking the user-agents of search engines);
4. By closing all links from the main page with the "nofollow" attribute.
These pages have noindex in their HTML or are blocked from crawling and indexing via the robots.txt file. They
could also have a rel="canonical" pointing to another URL.
If the page is blocked from indexing, then search engines will not be able to read (index) the content on the site.
Thus, they will not display the site in the search results.
Sometimes it is necessary to block some pages from indexing. This is especially true for duplicate pages. Check the
list of blocked pages to make sure that no page was blocked by mistake.
When a search engine crawler visits your site, it will crawl all the pages it can find via internal links and index them
accordingly. This report shows pages that are blocked from indexing; these pages will not show up in the SERPs.
Pages with identical meta title tags confuse search engines and users. Using the same meta title tag on multiple
pages indicates to search engines that you have duplicate content on these pages. This is likely to decrease SERP
rankings.
Pages with identical meta description tags confuse search engines and users. Using the same meta description tag
on multiple pages indicates to search engines that you have duplicate content on these pages. This is likely to
decrease SERP rankings. If search engines deem your description tag irrelevant to the content found on the page,
they are likely to generate their own description, which will not be tailored to the user and will decrease CTR.
TITLE and DESCRIPTION tags are imperative to CTR and to correctly telling the user what your page is about. If
search engines find identical TITLE and DESCRIPTION tags, they will create their own snippet from fragments of text
on your page that they deem most relevant - this is not ideal, and these tags will not be optimized for your users.
Your meta description and header 1 title tags <h1> should be different from one another. The <h1> title should be
the title and headline of your page that the user sees when on your website, and the meta description should be a
short summary of what is on the page and is viewed on the SERPs.
The TITLE tag affects:
1. Page ranking;
2. Snippet generation (a snippet is the description of the site shown in the search results).
The content of the TITLE affects the CTR of the snippet, and a high CTR increases the search rank of the site. The
title should be comprehensive and descriptive in order to attract customers to the site.
A short title tag does not give the search engines enough information about the page, and they may use another
text fragment from the page for the snippet instead of your desired title. This is likely to lead to decreased CTR and
to search engines demoting your page in the SERP rankings.
Missing h1: 13
The header 1 tag <h1> is the most important heading on your page: it tells the search engines what your page is
about and is likely to contain keywords you are trying to rank for. Search engines give substantial weight to <h1>
headings with keywords. The <h1> of a page should state the overarching topic of the page, and thus will likely
include keywords.
The DESCRIPTION tag affects:
1. Page ranking;
2. Snippet generation (a snippet is the description of the site shown in the search results).
The content of the DESCRIPTION affects the CTR of the snippet, and a high CTR increases the search rank of the
site. The description should be comprehensive and descriptive in order to attract customers to the site.
A short meta description does not give the search engines enough information about the page, and they may use
another text fragment from the page for the snippet instead of your desired description. This is likely to lead to
decreased CTR and to search engines demoting your page in the SERP rankings.
Thin content pages are pages with a small amount of content (words, in this case). Search engines consider such
pages useless for users and will likely decrease SERP rankings for sites with a large number of these pages.
This report refers to URLs whose content is 100% identical to other pages on your site set for indexing. This is
normally caused by incorrect placement of the rel="canonical" tag, or by forgetting it entirely. Multiple infractions
of 100% duplicate pages are highly likely to incur duplicate-page penalties.
Similar pages: 0
This report refers to URLs with more than 97% identical content compared to other pages on your site set for
indexing. To avoid this error, you can do one of the following:
1. Add more content to the pages with >97% identical content.
2. Add a rel="canonical" tag pointing to the original/authoritative source of the information on your site.
3. Rewrite duplicate sections of text on the pages with >97% identical content.
Keyword stuffing (also known as webspam or spamdexing) is a technique in which keywords are loaded repeatedly
into a web page's meta tags, visible content, or backlink anchor text in order to increase search rankings for those
keywords. Keyword stuffing may lead to a website being banned or penalized on all major search engines. If a
specific page is overstuffed with certain keywords, search engines may swap your landing page for a less relevant
page or de-index that page; multiple infractions are likely to lead to Google permanently de-indexing your entire
site, not just one page. For some sites (around 3% of cases), a high keyword density is not a problem - this is
especially true for product catalogs or price lists that are thin on content and repeat the product name and features
frequently.
We also check the keyword density on competitor sites in the top 10 for the keywords you wish to rank for. If your
competitors have a higher keyword density than your pages do, you can use more keywords per page. When you
are unsure whether your pages are overstuffed with keywords, always check the content on your competitors'
websites.
Your site can be penalized for stealing and using someone else's content.
If text on your site is identical to your competitors' content (30% or more), search engines may decrease your
rankings. Search engines can occasionally identify the authorship of text incorrectly and penalize your site even if
someone else originally copied the content from your webpage.
Copying is only tolerated for online stores using manufacturers' descriptions, where it is hard to write a unique
description for each item (although you should always try, for optimal SEO).
Online stores with original descriptions have an advantage in SERP rankings.
Our plagiarism checker is best at detecting blatant plagiarism, which can seriously affect the rankings of your site.
We always recommend manually checking any suspicious pages flagged by our algorithm.
If your content is deemed unsafe for an audience under 18, you may find that the site doesn't appear when people
search with SafeSearch on (it is on by default). Adult content and sites containing obscene language are completely
excluded from SafeSearch results. This report (although it does not catch 100% of adult-content instances) is also
useful for seeing whether anyone has infected your site with adult content to decrease your search rankings.
Officially, Google takes no position on curse words in website content. However, if Google crawls your site and
finds swear words, it may filter your site out of the results when SafeSearch is enabled (it is on by default). Note
that flagged words may not be swear words in their actual context; context is key here, so read the report and
check the links individually. On e-commerce sites specifically, swear words found in the feedback sections of
products are bad for search rankings, as they suggest the site is selling bad products that produce a negative user
experience.
A short meta title (fewer than 2 words) does not give the user or search engines enough information about the
page. This is likely to lead to decreased CTR and to search engines demoting your page in the SERP rankings. This
report also includes pages with 0 words in the meta title.
A short meta description (fewer than 5 words) does not give the user or search engines enough information about
the contents of the page. This report also includes pages with 0 words in the meta description.
The meta description affects:
1. Page ranking;
2. Snippet generation (a snippet is the description of a page shown in the search results).
A short meta description does not allow search engines to generate relevant snippets, so they fill in the missing
information from the main text of the page. This is likely to lead to decreased CTR and to search engines demoting
your page in the SERP rankings.
A long meta description will be truncated rather than included in the snippet in full: search engines truncate
descriptions longer than 240 to 280 characters (depending on the device). In that case your keywords may not
make it into the snippet, and the search engines can use a random text fragment from the page instead of your
desired meta description. You should always write a relevant description of between 240 and 280 characters; if you
do, Google is more likely to use this tag when generating the snippet.
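The character counts are easy to audit page by page; the sketch below (Python standard library) extracts a page's <title> and meta description and prints their lengths for comparison against the limits above.

    import urllib.request
    from html.parser import HTMLParser

    class SnippetTags(HTMLParser):
        """Collect the <title> text and the meta description of a page."""
        def __init__(self):
            super().__init__()
            self.in_title = False
            self.title = ""
            self.description = ""
        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "title":
                self.in_title = True
            elif tag == "meta" and a.get("name") == "description":
                self.description = a.get("content", "")
        def handle_endtag(self, tag):
            if tag == "title":
                self.in_title = False
        def handle_data(self, data):
            if self.in_title:
                self.title += data

    html = urllib.request.urlopen("http://www.simplybasics.in/", timeout=10)\
        .read().decode("utf-8", "replace")
    p = SnippetTags()
    p.feed(html)
    print("title:", len(p.title), "chars | description:", len(p.description), "chars")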
External links take users away from your site. You should carefully check this report to make sure your site does not
have any malware redirecting users to malicious sites.
External links
https://x.com/simply82246
https://www.instagram.com/simplybasics_/?hl=en
https://www.pinterest.com/basicssimply662/
https://www.youtube.com/@SimplyBasics-u6s
https://www.linkedin.com/in/simply-basics-9b6447331/
https://www.eventneedz.com/
No page should ever have more than 200 internal outbound links unless you are a directory or an e-commerce seller.
This is seen as a sign of a spammy site and you will be penalized on these pages.
These are pages with no internal outbound links. This is normal for .pdf files, but normal URLs should always have
internal links to follow. If such a page is a print version, a cart, and so on, it should be blocked from indexing.
Your internal linking structure should always point at least 5 internal links at each page you wish to rank on Google.
Pages receiving fewer than 5 inbound internal links are unlikely to be seen as authoritative by Google within your
site's hierarchy. The more links to a page, the higher its rank.
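Counting inbound internal links properly requires crawling the whole site, but the per-page half of the job is simple; the sketch below (Python standard library) counts how many links on one page stay on the same host.

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    PAGE = "http://www.simplybasics.in/"

    class LinkCollector(HTMLParser):
        """Gather every <a href> on the page, resolved to an absolute URL."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(urljoin(PAGE, href))

    html = urllib.request.urlopen(PAGE, timeout=10).read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(html)
    host = urlparse(PAGE).netloc
    internal = [u for u in collector.links if urlparse(u).netloc == host]
    print(len(internal), "internal outbound links on", PAGE)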
Video 33%
In stock 31%
Calculator 19%
Live chat 6%
Price 2%
1 www.amazon.in 15 4
2 phool.co 14 3
3 www.soulflower.in 14 2
4 www.forestessentialsindia.com 13 3
5 nirmalaya.com 10 3
Keywords in Top 5: 0
Keywords in Top 10: 0
Keywords in Top 20: 0
Keywords in Top 30: 0
Keywords in Top 50: 0