Responsive Design

Responsive web design

From Wikipedia, the free encyclopedia

Responsive web design (RWD) is an approach to web design aimed at crafting sites to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices (from desktop computer monitors to mobile phones).[1][2][3]

A site designed with RWD[1][4] adapts the layout to the viewing environment by using fluid, proportion-based grids,[5][6] flexible images,[7][8][9] and CSS3 media queries,[3][10][11] an extension of the @media rule, in the following ways:[12]

  • The fluid grid concept calls for page element sizing to be in relative units like percentages, rather than absolute units like pixels or points.[6]
  • Flexible images are also sized in relative units, so as to prevent them from displaying outside their containing element.[7]
  • Media queries allow the page to use different CSS style rules based on characteristics of the device the site is being displayed on, most commonly the width of the browser (a minimal sketch combining all three techniques follows this list).
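
In the sketch below, the class names and the 600px breakpoint are illustrative assumptions rather than values taken from the sources cited above; it shows a percentage-based grid, images capped at their container’s width, and a media query that stacks the columns on narrow screens.

    <style>
      /* Fluid grid: page elements sized in relative units (percentages) */
      .container { width: 90%; margin: 0 auto; }
      .main      { float: left; width: 70%; }
      .sidebar   { float: left; width: 30%; }

      /* Flexible images: never wider than their containing element */
      img { max-width: 100%; height: auto; }

      /* Media query: apply different rules on narrow viewports */
      @media (max-width: 600px) {
        .main, .sidebar { float: none; width: 100%; }
      }
    </style>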

Related concepts

Mobile first, unobtrusive JavaScript, and progressive enhancement

“Mobile first”, unobtrusive JavaScript, and progressive enhancement are related concepts that predate RWD. Browsers of basic mobile phones do not understand JavaScript or media queries, so a recommended practice is to create a basic web site and enhance it for smart phones and PCs, rather than rely on graceful degradation to make a complex, image-heavy site work on mobile phones.[13][14][15][16]

Progressive enhancement based on browser-, device-, or feature-detection

Where a web site must support basic mobile devices that lack JavaScript, browser (“user agent”) detection (also called “browser sniffing”) and mobile device detection[14][17] are two ways of deducing whether certain HTML and CSS features are supported (as a basis for progressive enhancement)—however, these methods are not completely reliable unless used in conjunction with a device capabilities database.

For more capable mobile phones and PCs, JavaScript frameworks like Modernizr, jQuery, and jQuery Mobile that can directly test browser support for HTML/CSS features (or identify the device or user agent) are popular. Polyfills can be used to add support for missing features—e.g. to support media queries (required for RWD) and to enhance HTML5 support on Internet Explorer. Feature detection also might not be completely reliable: a browser may report that a feature is available when it is either missing or so poorly implemented that it is effectively nonfunctional.[18][19]
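
As one example of the polyfill approach mentioned above, a page can conditionally load a media-query polyfill such as Respond.js for old versions of Internet Explorer via a conditional comment (the file path is an assumption):

    <!-- IE 8 and below load the polyfill, which adds basic support for
         min-width/max-width media queries; other browsers ignore this block. -->
    <!--[if lt IE 9]>
      <script src="/js/respond.min.js"></script>
    <![endif]-->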

Challenges, and other approaches

Luke Wroblewski has summarized some of the RWD and mobile design challenges and created a catalog of multi-device layout patterns.[20][21][22] He suggests that, compared with a simple RWD approach, device-experience or RESS (responsive web design with server-side components) approaches can provide a user experience that is better optimized for mobile devices.[23][24][25] Server-side “dynamic CSS” implementations of stylesheet languages like Sass or Incentivated’s MML can be part of such an approach: the server queries an API backed by a device capabilities database and serves styles tailored to the device (typically a mobile handset) in order to improve usability.[26] RESS is more expensive to develop, requiring more than just client-side logic, and so tends to be reserved for organizations with larger budgets. Google recommends responsive design for smartphone websites over other approaches.[27]

Although many publishers are starting to implement responsive designs, one ongoing challenge for RWD is that some banner advertisements and videos are not fluid.[28] However, search advertising and (banner) display advertising support specific device platform targeting and different advertisement size formats for desktop, smartphone, and basic mobile devices. Different landing page URLs can be used for different platforms,[29] or Ajax can be used to display different advertisement variants on a page.[17][21][30] CSS tables permit hybrid fixed-plus-fluid layouts.[31]
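
One way to get the hybrid fixed-plus-fluid layout mentioned above is with CSS table display values; in the sketch below (the class names and the 300px advertisement width are assumptions) the ad column keeps a fixed size while the content column flexes:

    <style>
      .layout  { display: table; width: 100%; }
      .content { display: table-cell; }               /* fluid: takes the remaining width */
      .ad-rail { display: table-cell; width: 300px; } /* fixed: e.g. a 300px ad slot */
    </style>
    <div class="layout">
      <div class="content">Fluid article content</div>
      <div class="ad-rail">Fixed-size banner advertisement</div>
    </div>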

There are now many ways of validating and testing RWD designs,[32] ranging from mobile site validators and mobile emulators[33] to simultaneous testing tools like Adobe Edge Inspect.[34] The Firefox browser and the Chrome console offer responsive design viewport resizing tools, as do third parties.[35][36]

History

A site layout example that adapts to browser viewport width was first demonstrated by Cameron Adams in 2004.[37] By 2008, a number of related terms such as “flexible”, “liquid”,[38] “fluid”, and “elastic” were being used to describe layouts. CSS3 media queries were almost ready for prime time in late 2008/early 2009.[39] Ethan Marcotte coined the term responsive web design[40] (RWD)—and defined it to mean fluid grids, flexible images, and media queries—in a May 2010 article in A List Apart.[1] He described the theory and practice of responsive web design in his brief 2011 book titled Responsive Web Design. Responsive design was listed as #2 in .net magazine’s Top Web Design Trends for 2012,[41] after progressive enhancement at #1.

Mashable called 2013 the Year of Responsive Web Design.[42] Many other sources have recommended responsive design as a cost-effective alternative to mobile applications.

Forbes featured a piece, “Why You Need To Prioritize Responsive Design Now”,[43] which made clear that having a mobile version of your website is no longer enough. Jody Resnick, President of Trighton Interactive, stated in his interview with Forbes, “Responsive websites simplify internet marketing and SEO. Instead of having to develop and manage content for multiple websites, businesses with responsive sites can take a unified approach to content management because they have only the one responsive site to manage.”

Resnick predicts, “As the internet transforms further into a platform of services and user interfaces that tie those services together, leveraging this technology in the future will allow companies to integrate a plethora of back-end services, such as Facebook, Twitter, Salesforce.com, and Amazon Web Services, and then present the integrated data back out through the front-end layer on a responsive design so the application looks great on all devices without custom coding needed for each device or screen size.”

Some believe that responsive design will become more prevalent than native apps simply because of its cross-browser compatibility and the cost associated with programming native apps.


References

  1. Marcotte, Ethan (May 25, 2010). “Responsive Web design”. A List Apart.
  2. “Ethan Marcotte’s 20 favourite responsive sites”. .net magazine. October 11, 2011.
  3. Gillenwater, Zoe Mickley (Dec 15, 2010). “Examples of flexible layouts with CSS3 media queries”. Stunning CSS3. p. 320. ISBN 978-0-321-722133.
  4. Pettit, Nick (Aug 8, 2012). “Beginner’s Guide to Responsive Web Design”. TeamTreehouse.com blog.
  5. “Core concepts of Responsive Web design”. Sep 8, 2014.
  6. Marcotte, Ethan (March 3, 2009). “Fluid Grids”. A List Apart.
  7. Marcotte, Ethan (June 7, 2011). “Fluid images”. A List Apart.
  8. Hannemann, Anselm (Sep 7, 2012). “The road to responsive images”. .net Magazine.
  9. Jacobs, Denise (April 24, 2012). “50 fantastic tools for responsive web design”. .net Magazine.
  10. Gillenwater, Zoe Mickley (Oct 21, 2011). “Crafting quality media queries”.
  11. “Responsive design—harnessing the power of media queries”. Google Webmaster Central. Apr 30, 2012.
  12. W3C @media rule.
  13. Wroblewski, Luke (November 3, 2009). “Mobile First”.
  14. Firtman, Maximiliano (July 30, 2010). Programming the Mobile Web. p. 512. ISBN 978-0-596-80778-8.
  15. “Graceful degradation versus progressive enhancement”. February 3, 2009.
  16. Designing with Progressive Enhancement. March 1, 2010. p. 456. ISBN 978-0-321-65888-3.
  17. “Server-Side Device Detection: History, Benefits And How-To”. Smashing magazine. September 24, 2012.
  18. “BlackBerry Torch: The HTML5 Developer Scorecard”. Sencha. 2010-08-18. Retrieved 2012-09-11.
  19. “Motorola Xoom: The HTML5 Developer Scorecard”. Sencha. 2011-02-24. Retrieved 2012-09-11.
  20. Wroblewski, Luke (May 17, 2011). “Mobilism: jQuery Mobile”.
  21. Wroblewski, Luke (February 6, 2012). “Rolling Up Our Responsive Sleeves”.
  22. Wroblewski, Luke (March 14, 2012). “Multi-Device Layout Patterns”.
  23. Wroblewski, Luke (February 29, 2012). “Responsive Design … or RESS”.
  24. Wroblewski, Luke (September 12, 2011). “RESS: Responsive Design + Server Side Components”.
  25. Andersen, Anders (May 9, 2012). “Getting Started with RESS”.
  26. “Responsive but not completely mobile optimised”. Incentivated.
  27. “Building Smartphone-Optimized Websites”. Google.
  28. Snyder, Matthew; Koren, Etai (Apr 30, 2012). “The state of responsive advertising: the publishers’ perspective”. .net Magazine.
  29. Google AdWords Targeting (Device Platform Targeting).
  30. JavaScript and Responsive Web Design. Google Developers.
  31. Table Layouts in RWD.
  32. Young, James (Aug 13, 2012). “Top responsive web design problems… testing”. .net Magazine.
  33. “Best mobile emulators and RWD testing tools”. The Mobile Web Design Blog. Nov 26, 2011.
  34. Rinaldi, Brian (September 26, 2012). “Browser testing… with Adobe Edge Inspect”.
  35. Responsive Design View in Firefox.
  36. Viewport resizer.
  37. Adams, Cameron (September 21, 2004). “Resolution dependent layout: Varying layout according to browser width”. The Man in Blue.
  38. CSS2 Liquid layout discussion.
  39. CSS3 Media Queries Candidate Recommendation.
  40. http://outseller.net/2015s-professional-responsive-web-design-offer-businesses/
  41. “15 top web design and development trends for 2012”. .net magazine. January 9, 2012.
  42. Cashmore, Pete (Dec 11, 2012). “Why 2013 Is the Year of Responsive Web Design”.
  43. Gunelius, Susan (March 13, 2013). “Why You Need To Prioritize Responsive Design Now”.

Small Business SEO

“Search marketing has grown in popularity as online search continues to evolve from a novelty to a standard feature in our everyday lives. Almost every business in the country, big or small and regardless of industry, has some kind of web presence, and everybody is competing for only a handful of positions at the top of search-engine results pages (SERPs).

Since larger companies — mega-corporations such as Walmart or Home Depot — already have millions of inbound links, decades of content, and a recurring base of online visitors, it’s no wonder that they generally appear in the top ranking positions when people search for commercial products. Regardless of what industry you’re in, you’ll always have at least one competitor who has been around longer and has tried harder than you (allocated more budget and resources) to build their visibility on the web and in search engines.

So how can you, a small business with limited experience and resources, compete with that level of online domination?

Thankfully, search-engine optimization (SEO) is no longer about sheer volume. It’s not about who’s been on the web the longest, who has the most inbound links, or even who has the biggest library of great content. It’s about which page or website is the most relevant for the searcher. Knowing that, there are several strategies you can implement that can give you the edge over the bigger, badder competition.

1. Specialize in a niche.

One of the best things you can do as a small business is give yourself a niche focus. Instinctively, you might think that the better option for search visibility is to cover as many areas of expertise as possible. For example, if you work in heating, cooling, plumbing, roofing, construction and a dozen other home improvement topics, you’ll be able to appear in search engines for queries related to any of those keywords.

However, if you’re trying to take down your biggest competitors, it’s better to take more of a niche focus. Having several areas of specialization gives you relevance for a wide range of keywords, but your relevance for each of them is somewhat low. If you pour all your effort into one or a small handful of keywords, you’ll be able to achieve a much higher visibility.

For example, if you specialize in indoor plumbing, you might miss out on limited visibility for all those other home improvement keywords, but you’ll be the best in indoor plumbing.

2. Engage in a long-tail keyword strategy.

Long-tail keyword strategies try to accomplish a similar goal. In niche specialization, you sacrifice minimal relevance in a large volume of topics for maximum relevance in a much smaller volume of topics. With long-tail keywords, you’ll be sacrificing minimal ranking potential with highly popular keywords for maximum ranking potential with less popular keywords.

Long-tail keywords are extended phrases Google looks for, such as “tips for installing a toilet in an upstairs bathroom” instead of the much shorter, more popular “toilet installation.” Ranking highly for long-tail keywords is much easier than ranking high for shorter keywords, so even though they bring in less traffic, they’re still more valuable for small businesses to go after.

Fortunately, optimizing for long-tail keywords is easy. You can research ideal long-tail keywords to go after using Webmaster Tools, or you can just publish lots of great content — long-tail keyword phrases tend to appear naturally in the course of your writing. For further information on identifying and using long-tail keywords, see “The Rise of the Long-Tail Keyword for SEO” and “How to Find Long-Tail Keywords Once You’ve Identified Your Primary Keywords.”

3. Leverage locality for optimization.

Another way to beat the competition is by targeting a much more local audience. Local search is becoming more relevant and more important, so in today’s context, being the best barber shop in Houston is far better than being an OK barber shop on a national scale.

Even if your business does operate on a national (or international) level, you can still capture a niche market share and edge out your competition in at least one key area by optimizing for a specific local area. In this section, I’ll introduce a handful of specific strategies you can use to build your reputation and relevance in your given city.

Event attendance and community building. Get your name out there by getting involved in the community. Attend major events whenever you can, such as fairs, festivals or community gatherings. This will give you two opportunities: First, you’ll immediately generate more business simply by being at the event and offering discounts or promotions to event attendees. Second, and more importantly for SEO, you’ll have the opportunity to brag about your attendance online.

Post excellent content on your website, using local-specific keywords, about your company’s attendance, and syndicate a press release about the opportunity for some high-authority and local-specific inbound links. This is one of the easiest ways to generate publicity and build some local-optimized content simultaneously.

Local reviews, on directory and aggregation sites such as Yelp or TripAdvisor, have become essential for local SEO. With Google’s Pigeon algorithm update earlier this year, Yelp and similar sites received a huge boost in priority. Now, sites with large volumes of positive reviews rank higher than similar sites with few or negative reviews. In fact, Yelp’s importance has increased so much that, in some cases, Yelp profiles are actually ranking higher than the official pages of the companies they represent.

What this means for small businesses is a new, key opportunity to jump in the rankings without worrying about producing content or building links. Instead, you can focus on cultivating strong, positive reviews from your customers. While Yelp explicitly forbids compensating your reviewers, or asking customers directly for reviews in any way, you can still encourage more reviews with Yelp stickers and occasional call-outs with a link on your social-media profiles.

Hyper-local content. Local search is getting more local, and taking advantage of that incoming trend could be the opportunity you need to crush a larger competitor — especially if that competitor operates in the same city as you.

Google is getting better at identifying and categorizing neighborhoods within a broader city, so you can take local search a step further by using neighborhood-specific keywords instead of just city and state names. Your potential success is determined by how Google views your neighborhood boundaries, so do some research before you begin.

4. Personalize your social engagement.

Aside from local search optimization, you can also increase your chances of overcoming steep competition by stepping up the “personal” factor in your brand strategy. Large businesses tend to lose a portion of their personalities once they hit a certain point in their growth, but being small and nimble gives you the advantage of giving each follower a more personal, humanized experience.

Nurture your following on social media, and you’ll attract more posts and followers, and the bigger and more active your social-media presence is, the higher you’ll rank in Google.

5. Become a recognized, authoritative content publisher.

Building brand awareness, loyalty, trust and credibility requires frequent and quality content publication. Most companies utilize an on-site blog to publish content, while others produce and distribute ebooks, webinars, podcasts, videos and other forms of content through various other channels.

The keys to building your brand through a content strategy are quality and consistency. Maximize the reach of each piece of content you publish to maximize your return on investment, and be consistent with your publication schedule so you start to become recognized as a dependable authority.

Conclusion

There’s no shortcut to rise to the top of the search engine rankings, especially when there’s a massive competitor lingering on the scene. But with a strategy that leverages your geographic location and your agility, you can selectively overcome your competitors in specific key areas.

Give yourself the best odds by narrowing your topic and keyword focus and increasing your location-specific relevance. You might not rank for as many keywords as the bigger players, but you will be able to surpass them in relevance for your chosen focal points.” 

JAYSON DEMERS – Founder and CEO, AudienceBloom

SEO

Search engine optimization

From Wikipedia, the free encyclopedia

Search engine optimization (SEO) is the process of affecting the visibility of a website or a web page in a search engine’s unpaid results – often referred to as “natural,” “organic,” or “earned” results. In general, the earlier (or higher ranked on the search results page), and more frequently a site appears in the search results list, the more visitors it will receive from the search engine’s users. SEO may target different kinds of search, including image search, local search, video search, academic search,[1] news search and industry-specific vertical search engines.

As an Internet marketing strategy, SEO considers how search engines work, what people search for, the actual search terms or keywords typed into search engines and which search engines are preferred by their targeted audience. Optimizing a website may involve editing its content, HTML and associated coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines. Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic.

The plural of the abbreviation SEO can also refer to “search engine optimizers,” those who provide SEO services.

History

Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed to do was to submit the address of a page, or URL, to the various engines which would send a “spider” to “crawl” that page, extract links to other pages from it, and return information found on the page to be indexed.[2] The process involves a search engine spider downloading a page and storing it on the search engine’s own server, where a second program, known as an indexer, extracts various information about the page, such as the words it contains and where these are located, as well as any weight for specific words, and all links the page contains, which are then placed into a scheduler for crawling at a later date.

Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the phrase “search engine optimization” probably came into use in 1997. Sullivan credits Bruce Clay as being one of the first people to popularize the term.[3] On May 2, 2007,[4] Jason Gambert attempted to trademark the term SEO by convincing the Trademark Office in Arizona[5] that SEO is a “process” involving manipulation of keywords, and not a “marketing service.” The reviewing attorney accepted the argument that while “SEO” cannot be trademarked when it refers to a generic process of keyword manipulation, it can be a service mark for providing “marketing services…in the field of computers.”

Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page’s content. Using meta data to index pages was found to be less than reliable, however, because the webmaster’s choice of keywords in the meta tag could potentially be an inaccurate representation of the site’s actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[6] Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.[7]
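
For reference, the keyword meta tag described here is an ordinary HTML meta element in the page head; the keyword list below is an invented example of the kind of webmaster-supplied data early engines relied on:

    <head>
      <!-- Webmaster-supplied keywords; easily abused, and largely ignored by modern engines -->
      <meta name="keywords" content="plumbing, Houston plumber, pipe repair">
    </head>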

By relying so much on factors such as keyword density which were exclusively within a webmaster’s control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results to any given search, poor quality or irrelevant search results could lead users to find other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate. Graduate students at Stanford University, Larry Page and Sergey Brin, developed “Backrub,” a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[8] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.
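
A commonly cited formulation of PageRank makes the random-surfer idea concrete (this is a standard textbook presentation, not a formula quoted from the paper cited above): with damping factor d, N total pages, M(p_i) the set of pages linking to page p_i, and L(p_j) the number of outbound links on page p_j,

    PR(p_i) = \frac{1 - d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)}

Intuitively, a page’s score is divided among the pages it links to, which is why a single link from a high-PageRank page can be worth more than many links from obscure pages.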

Page and Brin founded Google in 1998.[9] Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[10] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[11]

By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. In June 2007, The New York Times’ Saul Hansell stated Google ranks sites using more than 200 different signals.[12] The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages. Some SEO practitioners have studied different approaches to search engine optimization, and have shared their personal opinions.[13] Patents related to search engines can provide information to better understand search engines.[14]

In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged in users.[15] In 2008, Bruce Clay said that “ranking is dead” because of personalized search. He opined that it would become meaningless to discuss how a website ranked, because its rank would potentially be different for each user and each search.[16]

In 2007, Google announced a campaign against paid links that transfer PageRank.[17] On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.[18] As a result of this change, using nofollow leads to evaporation of PageRank. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the use of iframes, Flash and JavaScript.[19]
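
The nofollow mechanism discussed here is simply an attribute value on an ordinary link; a link marked this way is a hint to search engines that it should not pass PageRank (the URL below is a placeholder):

    <!-- Hint to search engines: do not pass PageRank credit through this link -->
    <a href="https://example.com/advertiser-page" rel="nofollow">Sponsored link</a>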

In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[20]

On June 8, 2010 a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index, intended to make fresh content show up in results more quickly. According to Carrie Grimes, the software engineer who announced Caffeine for Google, “Caffeine provides 50 percent fresher results for web searches than our last index…”[21]

Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[22]

In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources. Historically, websites have copied content from one another and benefited in search engine rankings by engaging in this practice; however, Google implemented a new system which punishes sites whose content is not unique.[23]

In April 2012, Google launched the Google Penguin update, the goal of which was to penalize websites that used manipulative techniques to improve their rankings on the search engine.[24]

In September 2013, Google released the Google Hummingbird update, an algorithm change designed to improve Google’s natural language processing and semantic understanding of web pages.

Relationship with search engines

By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Altavista and Infoseek, adjusted their algorithms in an effort to prevent webmasters from manipulating rankings.[25]

In 2005, an annual conference, AIRWeb (Adversarial Information Retrieval on the Web), was created to bring together practitioners and researchers concerned with search engine optimization and related topics.[26]

Companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[27] Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.[28] Google’s Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[29]

Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, chats, and seminars. Major search engines provide information and guidelines to help with site optimization.[30][31] Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website.[32] Bing Webmaster Tools provides a way for webmasters to submit a sitemap and web feeds, allows users to determine the crawl rate, and track the web pages index status.

Methods

Getting indexed

Search engines use complex mathematical algorithms to guess which websites a user seeks. In this diagram, each bubble represents a web site; programs sometimes called spiders examine which sites link to which other sites, with arrows representing these links. Websites getting more inbound links, or stronger links, are presumed to be more important and what the user is searching for. In this example, since website B is the recipient of numerous inbound links, it ranks more highly in a web search. And the links “carry through,” such that website C, even though it only has one inbound link, has an inbound link from a highly popular site (B), while site E does not.

The leading search engines, such as Google, Bing and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search engine indexed pages do not need to be submitted because they are found automatically. Two major directories, the Yahoo! Directory and DMOZ, both require manual submission and human editorial review.[33] Google offers Google Webmaster Tools, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links.[34] Yahoo! formerly operated a paid submission service that guaranteed crawling for a cost per click;[35] this was discontinued in 2009.[36]

Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[37]

Preventing crawling

To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine’s database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[38]
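
As a concrete illustration, the page-level exclusion described above uses the standard robots meta tag in the page head (robots.txt, by contrast, is a plain-text file served from the site root); a minimal sketch:

    <head>
      <!-- Ask compliant crawlers not to index this page or follow its links -->
      <meta name="robots" content="noindex, nofollow">
    </head>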

Increasing prominence

A variety of methods can increase the prominence of a webpage within the search results. Cross linking between pages of the same website to provide more links to important pages may improve its visibility.[39] Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.[39] Updating content so as to keep search engines crawling back frequently can give additional weight to a site. Adding relevant keywords to a web page’s meta data, including the title tag and meta description, will tend to improve the relevancy of a site’s search listings, thus increasing traffic. URL normalization of web pages accessible via multiple URLs, using the canonical link element[40] or via 301 redirects, can help make sure links to different versions of the URL all count towards the page’s link popularity score.
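
A sketch of the head-level markup this paragraph refers to (the title wording, description and URL are invented placeholders): a descriptive title tag, a meta description, and a canonical link element pointing duplicate URLs at the preferred version:

    <head>
      <title>Indoor Plumbing Repair in Houston | Example Plumbing Co.</title>
      <meta name="description" content="Licensed indoor plumbing repair and installation across Houston neighborhoods.">
      <!-- Canonical link element: consolidates link popularity from duplicate URLs -->
      <link rel="canonical" href="https://www.example.com/services/indoor-plumbing/">
    </head>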

White hat versus black hat techniques

SEO techniques can be classified into two broad categories: techniques that search engines recommend as part of good design, and those techniques of which search engines do not approve. The search engines attempt to minimize the effect of the latter, among them spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO.[41] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.[42]

An SEO technique is considered white hat if it conforms to the search engines’ guidelines and involves no deception. As the search engine guidelines[30][31][43] are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to divert the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility,[44] although the two are not identical.

Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similar to the background, in an invisible div, or positioned off screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.
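
For illustration only, the hidden-text technique described above often looks something like the fragment below (the class name and keywords are invented); markup like this is exactly what search engines treat as deceptive and may penalize:

    <style>
      /* Text positioned far off screen so human visitors never see it */
      .hidden-keywords { position: absolute; left: -9999px; }
    </style>
    <!-- Keyword-stuffed text visible only to crawlers -->
    <div class="hidden-keywords">cheap widgets best widgets buy widgets online</div>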

Another category sometimes used is grey hat SEO. This sits between the black hat and white hat approaches: the methods employed avoid the site being penalized, but they do not aim to produce the best content for users; instead, they are focused entirely on improving search engine rankings.

Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines’ algorithms, or by a manual site review. One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices.[45] Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google’s list.[46]

As a marketing strategy

SEO is not an appropriate strategy for every website, and other Internet marketing strategies can be more effective, such as paid advertising through pay-per-click (PPC) campaigns, depending on the site operator’s goals.[47] A successful Internet marketing campaign may also depend upon building high quality web pages to engage and persuade, setting up analytics programs to enable site owners to measure results, and improving a site’s conversion rate.[48]

SEO may generate an adequate return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[49] Search engines can change their algorithms, impacting a website’s placement, possibly resulting in a serious loss of traffic. According to Google’s CEO, Eric Schmidt, in 2010, Google made over 500 algorithm changes – almost 1.5 per day.[50] It is considered wise business practice for website operators to liberate themselves from dependence on search engine traffic.[51]

International markets

Optimization techniques are highly tuned to the dominant search engines in the target market. The search engines’ market shares vary from market to market, as does competition. In 2003, Danny Sullivan stated that Google represented about 75% of all searches.[52] In markets outside the United States, Google’s share is often larger, and Google remained the dominant search engine worldwide as of 2007.[53] As of 2006, Google had an 85–90% market share in Germany.[54] While there were hundreds of SEO firms in the US at that time, there were only about five in Germany.[54] As of June 2008, the market share of Google in the UK was close to 90% according to Hitwise.[55] Google achieves similar market share in a number of other countries.

As of 2009, there are only a few large markets where Google is not the leading search engine. In most cases, when Google is not leading in a given market, it is lagging behind a local player. The most notable example markets are China, Japan, South Korea, Russia and the Czech Republic where respectively Baidu, Yahoo! Japan, Naver, Yandex and Seznam are market leaders.

Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address. Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.[54]

Legal precedents

On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google. SearchKing’s claim was that Google’s tactics to prevent spamdexing constituted a tortious interference with contractual relations. On May 27, 2003, the court granted Google’s motion to dismiss the complaint because SearchKing “failed to state a claim upon which relief may be granted.”[56][57]

In March 2006, KinderStart filed a lawsuit against Google over search engine rankings. KinderStart’s website was removed from Google’s index prior to the lawsuit, and the amount of traffic to the site dropped by 70%. On March 16, 2007, the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart’s complaint without leave to amend, and partially granted Google’s motion for Rule 11 sanctions against KinderStart’s attorney, requiring him to pay part of Google’s legal expenses.[58][59]


Notes

  1. Beel, Jöran; Gipp, Bela; Wilde, Erik (2010). “Academic Search Engine Optimization (ASEO): Optimizing Scholarly Literature for Google Scholar and Co.” (PDF). Journal of Scholarly Publishing. pp. 176–190. Retrieved April 18, 2010.
  2. Brian Pinkerton. “Finding What People Want: Experiences with the WebCrawler” (PDF). The Second International WWW Conference, Chicago, USA, October 17–20, 1994. Retrieved May 7, 2007.
  3. Danny Sullivan (June 14, 2004). “Who Invented the Term “Search Engine Optimization”?”. Search Engine Watch. Retrieved May 14, 2007. See Google groups thread.
  4. “Trademark/Service Mark Application, Principal Register”. Retrieved May 30, 2014.
  5. “Trade Name Certification”. State of Arizona.
  6. Cory Doctorow (August 26, 2001). “Metacrap: Putting the torch to seven straw-men of the meta-utopia”. e-LearningGuru. Archived from the original on April 9, 2007. Retrieved May 8, 2007.
  7. Pringle, G.; Allison, L.; Dowe, D. (April 1998). “What is a tall poppy among web pages?”. Proc. 7th Int. World Wide Web Conference. Retrieved May 8, 2007.
  8. Brin, Sergey; Page, Larry (1998). “The Anatomy of a Large-Scale Hypertextual Web Search Engine”. Proceedings of the seventh international conference on World Wide Web. pp. 107–117. Retrieved May 8, 2007.
  9. “Google’s co-founders may not have the name recognition of say, Bill Gates, but give them time: Google hasn’t been around nearly as long as Microsoft.” 2008-10-15.
  10. Thompson, Bill (December 19, 2003). “Is Google good for you?”. BBC News. Retrieved May 16, 2007.
  11. Zoltan Gyongyi; Hector Garcia-Molina (2005). “Link Spam Alliances” (PDF). Proceedings of the 31st VLDB Conference, Trondheim, Norway. Retrieved May 9, 2007.
  12. Hansell, Saul (June 3, 2007). “Google Keeps Tweaking Its Search Engine”. New York Times. Retrieved June 6, 2007.
  13. Danny Sullivan (September 29, 2005). “Rundown On Search Ranking Factors”. Search Engine Watch. Retrieved May 8, 2007.
  14. Christine Churchill (November 23, 2005). “Understanding Search Engine Patents”. Search Engine Watch. Retrieved May 8, 2007.
  15. “Google Personalized Search Leaves Google Labs”. Search Engine Watch. Retrieved September 5, 2009.
  16. “Will Personal Search Turn SEO On Its Ear?”. WebProNews. Retrieved September 5, 2009.
  17. “8 Things We Learned About Google PageRank”. Search Engine Journal. Retrieved August 17, 2009.
  18. “PageRank sculpting”. Matt Cutts. Retrieved January 12, 2010.
  19. “Google Loses “Backwards Compatibility” On Paid Link Blocking & PageRank Sculpting”. Search Engine Land. Retrieved August 17, 2009.
  20. “Personalized Search for everyone”. Google. Retrieved December 14, 2009.
  21. “Our new search index: Caffeine”. Google: Official Blog. Retrieved May 10, 2014.
  22. “Relevance Meets Real Time Web”. Google Blog.
  23. “Google Search Quality Updates”. Google Blog.
  24. “What You Need to Know About Google’s Penguin Update”. Inc.com.
  25. Laurie J. Flynn (November 11, 1996). “Desperately Seeking Surfers”. New York Times. Retrieved May 9, 2007.
  26. “AIRWeb”. Adversarial Information Retrieval on the Web, annual conference. Retrieved Oct 4, 2012.
  27. David Kesmodel (September 22, 2005). “Sites Get Dropped by Search Engines After Trying to ‘Optimize’ Rankings”. Wall Street Journal. Retrieved July 30, 2008.
  28. Adam L. Penenberg (September 8, 2005). “Legal Showdown in Search Fracas”. Wired Magazine. Retrieved May 9, 2007.
  29. Matt Cutts (February 2, 2006). “Confirming a penalty”. mattcutts.com/blog. Retrieved May 9, 2007.
  30. “Google’s Guidelines on Site Design”. google.com. Retrieved April 18, 2007.
  31. “Bing Webmaster Guidelines”. bing.com. Retrieved September 11, 2014.
  32. “Sitemaps”. google.com. Retrieved May 4, 2012.
  33. “Submitting To Directories: Yahoo & The Open Directory”. Search Engine Watch. March 12, 2007. Retrieved May 15, 2007.
  34. “What is a Sitemap file and why should I have one?”. google.com. Retrieved March 19, 2007.
  35. “Submitting To Search Crawlers: Google, Yahoo, Ask & Microsoft’s Live Search”. Search Engine Watch. March 12, 2007. Retrieved May 15, 2007.
  36. “Yahoo Search Submit – Closed in Q4 of 2009”. rickramos.com. Retrieved 2014-01-20.
  37. Cho, J.; Garcia-Molina, H. (1998). “Efficient crawling through URL ordering”. Proceedings of the seventh conference on World Wide Web, Brisbane, Australia. Retrieved May 9, 2007.
  38. “Newspapers Amok! New York Times Spamming Google? LA Times Hijacking Cars.com?”. Search Engine Land. May 8, 2007. Retrieved May 9, 2007.
  39. “The Most Important SEO Strategy”. ClickZ. Retrieved April 18, 2010.
  40. “Bing – Partnering to help solve duplicate content issues – Webmaster Blog – Bing Community”. bing.com. Retrieved October 30, 2009.
  41. Andrew Goodman. “Search Engine Showdown: Black hats vs. White hats at SES”. Search Engine Watch. Retrieved May 9, 2007.
  42. Jill Whalen (November 16, 2004). “Black Hat/White Hat Search Engine Optimization”. searchengineguide.com. Retrieved May 9, 2007.
  43. “What’s an SEO? Does Google recommend working with companies that offer to make my site Google-friendly?”. google.com. Retrieved April 18, 2007.
  44. Andy Hagans (November 8, 2005). “High Accessibility Is Effective Search Engine Optimization”. A List Apart. Retrieved May 9, 2007.
  45. Matt Cutts (February 4, 2006). “Ramping up on international webspam”. mattcutts.com/blog. Retrieved May 9, 2007.
  46. Matt Cutts (February 7, 2006). “Recent reinclusions”. mattcutts.com/blog. Retrieved May 9, 2007.
  47. “What SEO Isn’t”. blog.v7n.com. June 24, 2006. Retrieved May 16, 2007.
  48. Melissa Burdon (March 13, 2007). “The Battle Between Search Engine Optimization and Conversion: Who Wins?”. Grok.com. Retrieved May 9, 2007.
  49. Andy Greenberg (April 30, 2007). “Condemned To Google Hell”. Forbes. Archived from the original on May 2, 2007. Retrieved May 9, 2007.
  50. Matt McGee (September 21, 2011). “Schmidt’s testimony reveals how Google tests algorithm changes”.
  51. Jakob Nielsen (January 9, 2006). “Search Engines as Leeches on the Web”. useit.com. Retrieved May 14, 2007.
  52. Graham, Jefferson (August 26, 2003). “The search engine that could”. USA Today. Retrieved May 15, 2007.
  53. Greg Jarboe (February 22, 2007). “Stats Show Google Dominates the International Search Landscape”. Search Engine Watch. Retrieved May 15, 2007.
  54. Mike Grehan (April 3, 2006). “Search Engine Optimizing for Europe”. Click. Retrieved May 14, 2007.
  55. Jack Schofield (June 10, 2008). “Google UK closes in on 90% market share”. London: Guardian. Retrieved June 10, 2008.
  56. “Search King, Inc. v. Google Technology, Inc., CIV-02-1457-M” (PDF). docstoc.com. May 27, 2003. Retrieved May 23, 2008.
  57. Stefanie Olsen (May 30, 2003). “Judge dismisses suit against Google”. CNET. Retrieved May 10, 2007.
  58. “Technology & Marketing Law Blog: KinderStart v. Google Dismissed—With Sanctions Against KinderStart’s Counsel”. blog.ericgoldman.org. Retrieved June 23, 2008.
  59. “Technology & Marketing Law Blog: Google Sued Over Rankings—KinderStart.com v. Google”. blog.ericgoldman.org. Retrieved June 23, 2008.

The Importance of Social Media


1. Social media allows you to spread your online reach faster

Old-school link building tactics involved getting your links anywhere, even on sites that had little to no activity. Social media allows you to put your website and brand directly in front of potential customers. Produce good quality content, share it across social media, and watch others interact with it and share it as well.

This is where the “social” part comes into play. Don’t just put it out there and hope it gets picked up. Interact with your social following and get them involved. When they feel involved and truly connected to your brand it results in sharing on a regular basis. While it can be hard, make sure to dedicate time each day to social interaction.

2. Active social media profiles drive high quality website traffic

The majority of links that are built by SEO companies do not bring traffic to your website. They are built to pass PageRank to your site, which in turn helps to increase the authority and power of your site, but as a traffic source they don’t do much. Sure, when done correctly in the form of guest blogging, links can be a great traffic source, but we are talking about the majority of companies that just spread links all over in the “hope” of improving rankings.

Social media drives REAL traffic to your website – traffic that is high quality! If someone is engaging with your company or brand on social media, then there is an obvious connection or interest. These are the types of visitors that eventually turn into customers.

3. Social media provides the new ranking signals (think “votes” for your website’s popularity)

Back in the day a single link was a sign of a website’s popularity. The more links, the more popular the site was, therefore it would rank higher. It then became a race to build the most links without a care in the world in regards to quality. Google soon developed a way to give more weight to specific links and not value them all the same.

Then people began to game that system as well, and Google began to issue penalties to sites that were mass spamming just to build massive amounts of links. There will always be individuals looking to win at the link game, but social signals are the newest way to gauge the popularity and authority of a website.

It is much harder to fake “real” social signals, as they come from actual users. There are many providers of fake social signals, but the fake accounts do not stick around long and soon disappear. Make sure to get your social media following to engage with your content and website, as it will help your SEO efforts.

4. Social media gives you a team of “link builders”

As you build your social media following on Facebook, Twitter, Google+, LinkedIn, Pinterest, and others, you build up a large following of people who have the ability to share your content.

Every piece of content that hits your website should immediately be shared on your social media profiles. This can result in your social connections sharing the content also, generating 100% natural links, something that Google loves!

Also make sure that your website content is easily shareable by your visitors. It seems like an obvious tip, but you would be surprised at how many sites exist out there that do not have social sharing buttons. There are many options to match the look and feel of virtually every website out there.
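
Sharing buttons do not have to be heavyweight widgets; plain links to the networks’ public share endpoints are often enough. The sketch below uses the commonly used Twitter and Facebook share URLs with a placeholder page address (check each network’s current documentation, since these endpoints can change):

    <!-- Simple share links; replace the URL parameter with the page being shared -->
    <a href="https://twitter.com/intent/tweet?url=https%3A%2F%2Fexample.com%2Fmy-post">Share on Twitter</a>
    <a href="https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fexample.com%2Fmy-post">Share on Facebook</a>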

5. Avoid Google penalties with less focus on links

It is no secret that Google looks down on aggressive link building done to game the search algorithm. They are constantly releasing updates and refreshes to remove low quality listings. Using social media together with your content marketing helps you build real, natural links that the search engines love and reward accordingly. Also, when you do it the right way, there is no need to frantically check Google Webmaster Tools (GWT) every day to make sure the site wasn’t hit with a penalty.

When it comes down to it, SEO results are responsible for a large percentage of a website’s traffic, so you need to use every opportunity available to improve those rankings. Social media is a great way to help enhance your search engine optimization. The days of shortcuts and low quality SEO are over, so make sure you position yourself for long term success, and that involves getting social.
