SEO techniques can be classified into two broad categories: techniques that search engine companies recommend as part of good design ("white hat"), and techniques of which search engines do not approve ("black hat"). The search engines attempt to minimize the effect of the latter, which includes spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO or black hat SEO.[50] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned, either temporarily or permanently, once the search engines discover what they are doing.[51]
Search engines use complex mathematical algorithms to interpret which websites a user seeks. In this diagram, each bubble represents a website, and programs sometimes called spiders examine which sites link to which other sites, with arrows representing those links. Websites that receive more inbound links, or stronger links, are presumed to be more important and more likely to be what the user is searching for. In this example, because website B is the recipient of numerous inbound links, it ranks more highly in a web search. The links also "carry through": website C, even though it has only one inbound link, benefits because that link comes from a highly popular site (B), while site E has no such link. Note: Percentages are rounded.
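The "carry through" behavior described above is the core idea behind PageRank-style link analysis. The following is a minimal, illustrative sketch in plain Python; the five-site link structure (A–E) is an assumption chosen to match the example, not taken from any real diagram.

```python
# Minimal PageRank-style sketch. The five-site link structure below
# (A-E) is hypothetical, chosen so that B has several inbound links
# and C's only inbound link comes from B, as in the text.
damping = 0.85

links = {          # site -> sites it links out to (assumed layout)
    "A": ["B"],
    "B": ["C", "A"],
    "D": ["B"],
    "E": ["B"],
    "C": [],       # C links nowhere (a "dangling" page)
}
sites = list(links)

# Start with equal rank everywhere, then iterate the update rule:
# each site splits its rank equally among the pages it links to.
ranks = {s: 1.0 / len(sites) for s in sites}
for _ in range(50):
    new = {s: (1 - damping) / len(sites) for s in sites}
    for src, outs in links.items():
        if outs:
            share = damping * ranks[src] / len(outs)
            for dst in outs:
                new[dst] += share
        else:
            # Dangling page: spread its rank evenly over all sites.
            for s in sites:
                new[s] += damping * ranks[src] / len(sites)
    ranks = new

# B, with three inbound links, ends up ranked highest; C outranks E
# because its single inbound link "carries through" from popular B.
print(sorted(ranks, key=ranks.get, reverse=True))
```

Real ranking systems weigh many more signals than link counts, but the iteration above captures why a single link from a popular page can outweigh the absence of links entirely.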
Smartphone - In this document, "mobile" or "mobile devices" refers to smartphones, such as devices running Android, iPhone, or Windows Phone. Mobile browsers are similar to desktop browsers in that they can render a broad set of the HTML5 specification, although their screen size is smaller and in almost all cases their default orientation is vertical.
Social media can be used not only as public relations and direct marketing tools but also as communication channels targeting very specific audiences with social media influencers and social media personalities and as effective customer engagement tools.[15] Technologies predating social media, such as broadcast TV and newspapers can also provide advertisers with a fairly targeted audience, given that an ad placed during a sports game broadcast or in the sports section of a newspaper is likely to be read by sports fans. However, social media websites can target niche markets even more precisely. Using digital tools such as Google Adsense, advertisers can target their ads to very specific demographics, such as people who are interested in social entrepreneurship, political activism associated with a particular political party, or video gaming. Google Adsense does this by looking for keywords in social media user's online posts and comments. It would be hard for a TV station or paper-based newspaper to provide ads that are this targeted (though not impossible, as can be seen with "special issue" sections on niche issues, which newspapers can use to sell targeted ads).
On April 24, 2012, many observers noticed that Google had begun to penalize companies that buy links for the purpose of passing rank. Google called the update Penguin. Since then, Google has rolled out several further Penguin and Panda updates. SEM, however, has nothing to do with link buying and focuses on organic SEO and PPC management. As of October 20, 2014, Google had released three official revisions of its Penguin update.

When Googlebot crawls a page, it should see the page the same way an average user does. For optimal rendering and indexing, always allow Googlebot access to the JavaScript, CSS, and image files used by your website. If your site's robots.txt file disallows crawling of these assets, it directly harms how well our algorithms render and index your content. This can result in suboptimal rankings.
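As a sketch (all paths hypothetical), a robots.txt along these lines keeps a private area out of the crawl without blocking the assets Googlebot needs for rendering:

```
# Hypothetical robots.txt: block a private area, but leave CSS,
# JavaScript, and image assets crawlable so pages render correctly.
User-agent: *
Disallow: /private/

# Avoid rules like these, which would hide rendering assets:
# Disallow: /css/
# Disallow: /js/
```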
Inclusion in Google's search results is free and easy; you don't even need to submit your site to Google. Google is a fully automated search engine that uses web crawlers to explore the web constantly, looking for sites to add to our index. In fact, the vast majority of sites listed in our results aren't manually submitted for inclusion, but found and added automatically when we crawl the web. Learn how Google discovers, crawls, and serves web pages.
When would this be useful? If your site has a blog with public commenting turned on, links within those comments could pass your reputation to pages that you may not be comfortable vouching for. Blog comment areas on pages are highly susceptible to comment spam. Nofollowing these user-added links ensures that you're not giving your page's hard-earned reputation to a spammy site.
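As an illustration (URL and surrounding markup hypothetical), a user-added comment link with `rel="nofollow"` might look like this; the link still works for readers, but it signals to search engines not to pass your page's reputation to the target:

```html
<p class="comment">
  Great post! Check out
  <a href="http://example.com/some-site" rel="nofollow">my site</a>.
</p>
```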
YouTube is another popular avenue; advertisements are tailored to suit the target audience. The language used in the commercials and the ideas used to promote the product reflect the audience's style and taste. The ads on this platform are also usually in sync with the content of the requested video, which is another advantage YouTube brings to advertisers: certain ads are presented with certain videos because the content is relevant. Promotional opportunities such as sponsoring a video are also possible on YouTube; "for example, a user who searches for a YouTube video on dog training may be presented with a sponsored video from a dog toy company in results along with other videos."[61] YouTube also enables publishers to earn money through its YouTube Partner Program. Companies can pay YouTube for a special "channel" which promotes the company's products or services.
Robots.txt is not an appropriate or effective way of blocking sensitive or confidential material. It only instructs well-behaved crawlers that the pages are not for them, but it does not prevent your server from delivering those pages to a browser that requests them. One reason is that search engines could still reference the URLs you block (showing just the URL, no title or snippet) if there happen to be links to those URLs somewhere on the Internet (like referrer logs). Also, non-compliant or rogue search engines that don't acknowledge the Robots Exclusion Standard could disobey the instructions of your robots.txt. Finally, a curious user could examine the directories or subdirectories in your robots.txt file and guess the URL of the content that you don't want seen.
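If the goal is merely to keep a page out of search results, rather than to protect truly confidential data, a more reliable signal than robots.txt is a `noindex` directive. Note that crawlers can only see this directive if the page is not blocked from crawling; genuinely sensitive content should sit behind authentication instead:

```html
<!-- Keeps the page out of search results even if other sites link
     to it; must be on a crawlable page to be seen by the crawler. -->
<meta name="robots" content="noindex">
```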
An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines[18][19][52] are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the online "spider" algorithms, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility,[53] although the two are not identical.

On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google. SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations. On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."[68][69]


Creating high quality content takes a significant amount of at least one of the following: time, effort, expertise, and talent/skill. Content should be factually accurate, clearly written, and comprehensive. So, for example, if you describe your page as a recipe, provide a complete recipe that is easy to follow, rather than just a set of ingredients or a basic description of the dish.

Social media itself is a catch-all term for sites that may provide radically different social actions. For instance, Twitter is a social site designed to let people share short messages or “updates” with others. Facebook, in contrast, is a full-blown social networking site that allows for sharing updates and photos, joining events, and a variety of other activities.
In addition to giving you insight into the search volume and competition level of keywords, most keyword research tools will also give you detailed information about the average or current estimated CPC for particular keywords. This is particularly important for businesses with smaller ad budgets; the feature allows you to predict whether certain keywords will be truly beneficial to your ad campaigns or whether they'll cost too much.
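That prediction often comes down to simple arithmetic. A back-of-envelope sketch (all figures hypothetical, not from any real tool): divide your budget by the estimated CPC to see how many clicks you can afford, and divide CPC by your profit per conversion to find the break-even conversion rate.

```python
# Hypothetical numbers for illustration only.
daily_budget = 25.00           # assumed daily ad budget in dollars
estimated_cpc = 2.50           # CPC estimate from a keyword tool (assumed)
profit_per_conversion = 40.00  # assumed margin on one sale

# How many clicks the budget buys per day.
affordable_clicks = daily_budget / estimated_cpc

# Conversion rate at which each click pays for itself.
breakeven_conversion_rate = estimated_cpc / profit_per_conversion

print(f"{affordable_clicks:.0f} clicks/day")
print(f"break-even conversion rate: {breakeven_conversion_rate:.2%}")
```

With these assumed figures, the keyword buys 10 clicks a day and needs better than a 6.25% conversion rate to break even; a keyword whose CPC pushes that rate above what your landing page realistically converts is one to skip.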

In the social sphere, things change fast. New networks emerge, while others go through significant demographic shifts. Your business will go through periods of change as well. All of this means that your social media strategy should be a living document that you look at regularly and adjust as needed. Refer to it often to keep you on track, but don’t be afraid to make changes so that it better reflects new goals, tools, or plans.


Small businesses also use social networking sites as a promotional technique. Businesses can follow individuals' social networking site use in the local area and advertise specials and deals. These can be exclusive, taking the form of "get a free drink with a copy of this tweet". This type of message encourages other locals to follow the business on the sites in order to obtain the promotional deal. In the process, the business gets seen and promotes itself (brand visibility).
Link text is the visible text inside a link. This text tells users and Google something about the page you're linking to. Links on your page may be internal—pointing to other pages on your site—or external—leading to content on other sites. In either of these cases, the better your anchor text is, the easier it is for users to navigate and for Google to understand what the page you're linking to is about.
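As an illustration (URLs hypothetical), compare descriptive anchor text with generic text. The first link tells users and search engines what the target page is about; the second says nothing useful:

```html
<!-- Descriptive: -->
<a href="https://example.com/guide/cheese-making">our cheese-making guide</a>

<!-- Avoid generic anchor text like this: -->
<a href="https://example.com/guide/cheese-making">click here</a>
```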
Search engines reward you when sites link to yours – they assume that your site must be valuable and you’ll rank higher in search results. And the higher the “rank” of the sites that link to you, the more they count in your own ranking. You want links from popular industry authorities, recognized directories, and reputable companies and organizations.
Having a different description meta tag for each page helps both users and Google, especially in searches where users may bring up multiple pages on your domain (for example, searches using the site: operator). If your site has thousands or even millions of pages, hand-crafting description meta tags probably isn't feasible. In this case, you could automatically generate description meta tags based on each page's content.
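One way such automatic generation might look, as a minimal sketch: take the page's body text, collapse whitespace, and truncate near a typical snippet length. The function name and length limit are assumptions for illustration; a real system would prefer structured fields (product name, author, price, and so on) over raw body text.

```python
import html
import re

def generate_meta_description(page_text: str, max_len: int = 155) -> str:
    """Sketch: build a description meta tag from a page's body text."""
    # Collapse runs of whitespace into single spaces.
    text = re.sub(r"\s+", " ", page_text).strip()
    if len(text) > max_len:
        # Truncate at the last word boundary before the limit.
        text = text[:max_len].rsplit(" ", 1)[0] + "…"
    return f'<meta name="description" content="{html.escape(text, quote=True)}">'

tag = generate_meta_description(
    "Baked goods and recipes.   Family-run bakery sharing weekly "
    "recipes, baking tips, and seasonal specials since 1998."
)
print(tag)
```

Even generated descriptions should remain page-specific: templating the same sentence across thousands of pages defeats the purpose of having distinct descriptions.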

Webmasters and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters only needed to submit the address of a page, or URL, to the various engines which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[5] The process involves a search engine spider downloading a page and storing it on the search engine's own server. A second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight for specific words, as well as all links the page contains. All of this information is then placed into a scheduler for crawling at a later date.
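The spider/indexer split described above can be sketched in a few lines of Python. This toy version uses hard-coded "downloaded" pages instead of real HTTP fetches, and the page contents are invented for illustration:

```python
import re
from collections import defaultdict

# Stand-ins for pages the spider has already downloaded.
pages = {
    "/a": '<a href="/b">link</a> cheese recipes and cheese news',
    "/b": 'daily news about cheese',
}

def extract_links(body: str) -> list[str]:
    # The spider's job: pull out links to schedule for later crawling.
    return re.findall(r'href="([^"]+)"', body)

def index_page(url: str, body: str, index: dict) -> None:
    # The indexer's job: record which words appear on which page,
    # with a simple count standing in for per-word "weight".
    words = re.findall(r"[a-z]+", re.sub(r"<[^>]+>", " ", body).lower())
    for word in words:
        index[word][url] = index[word].get(url, 0) + 1

index: dict = defaultdict(dict)
schedule = []
for url, body in pages.items():
    schedule.extend(extract_links(body))  # links queued for a later crawl
    index_page(url, body, index)

print(index["cheese"])   # pages containing "cheese", with counts
print(schedule)          # links found, queued for future crawling
```

Real indexers also record word positions and weights (for example, terms in titles or headings), but the two-program structure, a downloader that feeds a scheduler and an indexer that inverts pages into a word-to-page map, is the same.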