
Wednesday, June 2, 2010

Seven simple things to remove bad press from the search results

Seven things you can do to remove bad press from the search results

No matter how good your company is, some people will write something negative about it, even if you have tried your best to help them.

Dissatisfied customers might write negative comments about your company in their blogs, or some of your competitors might try to damage your reputation by creating fake comments about your site.

What can you do if web pages with negative comments appear on Google's first result page for your company name?

  1. Fix the problem. If people write negative reviews about your company, the first thing you should do is fix the problem that caused the negative review.
  2. It doesn't hurt to ask. Send the webmaster of the web page with the negative review a polite email and ask for removal of the negative comments. Many webmasters will cooperate if you explain the issue.
  3. Offer extra help. You can always try to contact the person who is speaking badly about your service and try to resolve the issue.
  4. Give web pages with positive comments a boost. If the webmaster does not want to remove the negative review, find websites that contain positive comments about your site.
    Link to these pages from your own website to increase their link popularity. The more links the pages with the positive reviews have, the higher they will rank in the search results.
  5. Ask for testimonials from happy customers. If you receive positive feedback from customers, ask them to write a review on ConsumerReview.com, Epinions.com or similar sites.
  6. Add your website to company wiki pages. Websites like AboutUs.org allow you to create an article about your company. If your company is important enough, you might even create an entry in Wikipedia. These wiki pages will also appear in the search results when someone searches for your company name.
  7. Make sure that your own website tops the search results.
    If your own website comes first for your company name, most people will click on your link and won't look further.

Removing negative comments from the search engine results can take some time. It's best to avoid negative experiences at the outset by providing high quality products and good customer support.

Saturday, March 27, 2010

Matt Cutts Guidelines Interview

Here are SEO guidelines from search engine guru Matt Cutts:
  1. The more relevant links you have, the more pages of your site will be indexed
    Matt Cutts said that the number of pages that Google indexes from your website is roughly proportional to the PageRank of your website. That means that more pages of your website will be indexed if your website has many inbound links.

    Google does not have an indexation cap, i.e. they will index all pages of your website if you have enough inbound links. Remember that the PageRank that Google uses in its ranking algorithm is not the PageRank that is displayed in Google's toolbar.
  2. Slow servers can cause problems
    If Google can only crawl two pages at any given time due to a slow server, Google can set some sort of upper bound on how many pages they will fetch from that host server. This can be a problem for websites that are hosted on shared or slow servers.
  3. Duplicate content can cause problems
    "Imagine we crawl three pages from a site, and then we discover that the two other pages were duplicates of the third page. We'll drop two out of the three pages and keep only one, and that?s why it looks like it has less good content."

    As mentioned above, Google will index your web pages based on the PageRank of your pages. If you have duplicate content, some pages of your website will be discarded and you'll waste ranking opportunities.

    "It's totally fine for a page to link to itself with rel=canonical, and it's also totally fine, at least with Google, to have rel=canonical on every page on your site."

    However, Google does not always obey the canonical tag (see the first sketch after this list to check your own canonical tags):

    "The crawling and indexing team wants to reserve the ultimate right to determine if the site owner is accidentally shooting themselves in the foot and not listen to the rel=canonical tag."
  4. Affiliate pages don't get high rankings
    If a website is an affiliate website that is very similar to other pages (only with a different logo, etc.), then it won't get high rankings.

    If Google detects an affiliate link, then that link won't pass any PageRank power.
  5. Redirects work but they don't pass the whole PageRank
    If you change your domain name and redirect the old pages to the new ones with a 301 redirect, the link power will be passed to your new domain name, but the overall power of the links will decrease: 301 redirects do not pass the full PageRank. (The second sketch after this list shows how to check that a redirect really returns a 301.)
  6. Low quality pages can cause problems
    "If there are a large number of pages that we consider low value, then we might not crawl quite as many pages from that site, but that is independent of rel=canonical."

    If you have a lot of web pages with thin content, Google might crawl fewer pages of your website. Matt Cutts also suggested that it might help to be wordy:

    "You really want to have most of your pages have actual products with lots of text on them."
  7. PageRank sculpting and website navigation
    Google does not want you to sculpt your website for PageRank reasons. The best way to pass link power from one page to other pages is to have a good website navigation.

    "Site architecture, how you make links and structure appear on a page in a way to get the most people to the products that you want them to see, is really a better way to approach it then trying to do individual sculpting of PageRank on links."

    "You can distribute that PageRank very carefully between related products, and use related links straight to your product pages rather than into your navigation. I think there are ways to do that without necessarily going towards trying to sculpt PageRank."
  8. You still shouldn't use JavaScript links for your website navigation
    "For a while, we were scanning within JavaScript, and we were looking for links. Google has gotten smarter about JavaScript and can execute some JavaScript.

    I wouldn't say that we execute all JavaScript, so there are some conditions in which we don't execute JavaScript.

    We do have the ability to execute a large fraction of JavaScript when we need or want to. One thing to bear in mind if you are advertising via JavaScript is that you can use NoFollow on JavaScript links."
  9. Google does not like paid links
    Matt Cutts said that Google doesn't want advertisements to affect search engine rankings.

    They might put out a call for people to report more about link spam in the coming months. Matt Cutts said that Google "does a lot of stuff" to try to detect ads and make sure that they don't unduly affect search engines.
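
To check point 3 on your own pages, here is a minimal sketch, assuming Python 3 with only the standard library and a placeholder URL, that fetches a page and prints the URL its rel=canonical tag points to:

    # Minimal sketch: print the rel=canonical URL of a page.
    # Python 3 standard library only; the URL below is a placeholder.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class CanonicalParser(HTMLParser):
        """Collects the href of a <link rel="canonical"> tag."""
        def __init__(self):
            super().__init__()
            self.canonical = None

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
                self.canonical = attrs.get("href")

    url = "http://www.example.com/some-page"  # placeholder
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = CanonicalParser()
    parser.feed(html)
    print(parser.canonical or "no rel=canonical tag found")

If every important page prints its own preferred URL, your canonical tags are set up the way Matt Cutts describes.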
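
For point 5, a second sketch with the same assumptions shows whether an old URL really answers with a 301 permanent redirect and where it points. http.client is used because it does not follow redirects on its own:

    # Minimal sketch: show the status code and Location header of an old URL
    # without following the redirect. Placeholder URL; Python 3 standard library.
    import http.client
    from urllib.parse import urlparse

    old_url = "http://www.old-domain.example/page.html"  # placeholder
    parts = urlparse(old_url)
    conn = http.client.HTTPConnection(parts.netloc)
    conn.request("HEAD", parts.path or "/")
    response = conn.getresponse()
    print(response.status)                 # should be 301 for a permanent redirect
    print(response.getheader("Location"))  # the new URL the old page points to
    conn.close()

A 302 here, or a chain of several hops, is worth fixing, because only a clean 301 signals a permanent move.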


Original interview: http://www.stonetemple.com/articles/interview-matt-cutts-012510.shtml

Wednesday, December 16, 2009

To Duplicate or Not to Duplicate

What is duplicate content?

A Google patent on duplicate document detection in a web crawler system contains a definition of duplicate content:

"Duplicate documents are documents that have substantially identical content, and in some embodiments wholly identical content, but different document addresses."

The patent describes three scenarios in which duplicate documents are encountered by a web crawler:

1. Two pages, comprising any combination of regular web page(s) and temporary redirect page(s), are duplicate documents if they share the same page content, but have different URLs.

2. Two temporary redirect pages are duplicate documents if they share the same target URL, but have different source URLs.

3. A regular web page and a temporary redirect page are duplicate documents if the URL of the regular web page is the target URL of the temporary redirect page or the content of the regular web page is the same as that of the temporary redirect page.

A permanent redirect page is not directly involved in duplicate document detection because the crawlers are configured not to download the content of the redirecting page.

How does Google detect duplicate content?

According to the patent description, Google's web crawler consults the duplicate content server to check if a found page is a copy of another document. The algorithm then determines which version is the most important version.

Google can use different methods to detect duplicate content. For example, Google might take "content fingerprints" and compare them when a new web page is found.
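
The patent does not spell out the fingerprinting algorithm, so here is a deliberately naive sketch in Python of the general idea: hash the normalized text of each page and treat URLs whose content hashes to the same fingerprint as one equivalence class. Real systems use fuzzier fingerprints that also catch near-duplicates.

    import hashlib

    def fingerprint(page_content: str) -> str:
        """Naive content fingerprint: hash of the normalized page text."""
        normalized = " ".join(page_content.split()).lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    # Toy duplicate content server: group URLs by their content fingerprint.
    pages = {
        "http://example.com/page":        "Some article text here.",
        "http://example.com/page?ref=ad": "Some article  text HERE.",  # duplicate
        "http://example.com/other":       "Completely different text.",
    }
    classes = {}
    for url, content in pages.items():
        classes.setdefault(fingerprint(content), []).append(url)

    for urls in classes.values():
        if len(urls) > 1:
            print("duplicate documents:", urls)

Only one URL from each group would be kept as the canonical page; the others would be dropped from the index.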

Interestingly, it's not always the page with the highest PageRank that is chosen as the most important URL for the content:

"In some embodiments, a canonical page of an equivalence class is not necessarily the document that has the highest score (e.g., the highest page rank or other query-independent metric)."

How does this affect your website?

If you want to get high rankings, it is easier to do so with unique content. Try to use as much original content as possible on your web pages.

If your website must use the same content as another website, make sure that your website has better inbound links than the other websites that carry the same content. Then it's likely that your website will be chosen as the most important URL for the content.

If your website has unique content, you don't have to worry about potential duplicate content penalties. Optimize that content for search engines and make sure that your website has good inbound links.

Monday, July 13, 2009

The Time Has Come To Regulate Search Engine Marketing And SEO

The following post was written by a well known executive at one of the largest sites on the Internet. The author has requested to remain anonymous - not for dramatic effect, but because of the backlash he would receive from the SEO industry and possibly Google itself. He also doesn’t want his company associated with the post.

He is starting a discussion on the need for government regulation of the organic and paid search policies of Google, which maintains a commanding lead in search market share today, or at least for transparency in how search results are determined. There is clearly growing frustration with the constantly changing “border policies” that are created and enforced by Google and other search engines. It is a fascinating read.