The above word cloud shows the most common terms appearing on dozens of authoritative pages in Google's search results for the topic of the semantic web and search. Other pages on Clickstream discuss this technique for finding semantic keywords and topics. The visual gives you a holistic view of the vocabulary content creators "in the know" use in their writing.
The importance of semantic associations in organizing the web pre-dates the "social internet" we still think of as Web 2.0. The idea was fully articulated in 2001 by Tim Berners-Lee, who said the Web should provide "well-defined meaning, better enabling computers and people to work in cooperation." While the term Web 2.0 was widely accepted, whether Web 3.0 is a new phase already underway, or one still coming, with clear characteristics remains debated.
John Markoff is most often cited for describing what a 3.0 generation would look like. He saw machine learning, AI, natural-language search, and microformats as integral. At the heart of the 3.0 definition is the intelligent Web. Most website content experts no longer refer to a numbered phase of the Web's progression, since 2.0 seems a bit "old" at this point. Fasten your seat belts: the future is intelligent technology everywhere.
Keywords were demoted! Topics now rule the roost for ranking
2016 is the year content strategists and SEOs woke up to the importance of ranking web pages on the basis of topics instead of keywords. Good content strategy also means good UX: users generally want a website that covers a theme comprehensively. Very few products outside enterprise software help users find topic gaps and "topic creep" (my term), where website content is spread too broad. I presented a webinar with SEMrush that focused on techniques, using software developed for this website, for discovering semantically related keyword phrases and topics.
The machine learning Google employed to rank pages on comprehensive content rather than keyword placement took years to develop into a reliable algorithm. Now that Hummingbird is three years old, SEOs and content planners are grappling with how to create holistic website content that ranks better than strategic keyword placement does.
If you want to understand how Google indexes based on semantics, a good place to start is the difference between structured markup, which lets developers add a layer of logic that informs computers on the Web, and latent semantic indexing, which rests on the search engine's own comparisons of latent associations across web pages. Many people recognize the intelligent Web from initiatives Google has pushed in recent years: structured and unstructured semantic data that it uses to "learn" the intent of user searches.
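To make the structured-markup side concrete, here is a minimal sketch of schema.org structured data embedded as JSON-LD. The headline, author name, and topics are placeholders for illustration, not values drawn from any real page:

```html
<!-- Structured markup: the developer explicitly declares meaning
     using the schema.org vocabulary, rather than leaving the
     search engine to infer it from latent word associations. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "The Semantic Web and Search",
  "author": { "@type": "Person", "name": "Van Buskirk" },
  "about": ["semantic web", "search engines"]
}
</script>
```

A crawler reading this block does not have to guess what the page is about or who wrote it; the relationships are stated outright in a vocabulary it already understands.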
Semantic Tagging and HTML 5
The World Wide Web Consortium (W3C) sets standards for web page markup and scripting that help search engines like Google find relationships in the content creators piece together. It presides over standards changes (sorry, but the W3C always seems like nobility given how it sets the law of the land for web standards; in fact, its governing decisions are quite open and accessible). HTML5 replaces much of HTML 4's generic tagging for elements such as navigation and footer IDs with dedicated semantic elements, and the specification became a final W3C Recommendation in October 2014. This update lets page creators mark up relationships more easily without having to bolt on extra vocabularies like schema.org. Think about how tags such as nav, article, section, and footer bring a contextual, semantic logic to the way a page can be read by a search engine.
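As an illustration, here is a hypothetical page skeleton (not markup from any specific site) showing how HTML5's semantic elements let the document structure itself describe what each region means:

```html
<!-- A minimal HTML5 page outline using semantic elements.
     In HTML 4 these regions were typically generic blocks like
     <div id="nav"> and <div id="footer">; HTML5 names them directly. -->
<body>
  <header>
    <h1>The Semantic Web and Search</h1>
    <nav>
      <ul>
        <li><a href="/topics">Topics</a></li>
        <li><a href="/keywords">Keywords</a></li>
      </ul>
    </nav>
  </header>

  <main>
    <article>
      <h2>Why topics outrank keywords</h2>
      <section>
        <p>Body copy for the first sub-topic goes here.</p>
      </section>
      <aside>Related reading, pulled out of the main flow.</aside>
    </article>
  </main>

  <footer>
    <p>&copy; 2016 Clickstream</p>
  </footer>
</body>
```

A search engine parsing this page can distinguish navigation links from body copy, and supplementary material in the aside from the main article, without any extra annotation.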
In sum, we can think of the new semantic ranking Google outputs with Hummingbird as analyzing content as data, in all its complexity, rather than analyzing keywords without consideration of their complex relationships.
Please contact Van Buskirk to tighten the topic relevancy of your website.