Technical SEO is the optimisation of your domain at the level of servers, HTML, CSS, and the way Search Engines interact with them. It revolves around coding and networking expertise.
What is TSEO?
Technical Search Engine Optimisation is the new buzzword in modern SEO, yet it remains very underdeveloped. SEO stayed largely constant for many years, because the algorithm used to rank results was hard-coded.
Hard coding – fix (data or parameters) in a program in such a way that they cannot be altered without modifying the program. (TechTarget)
This meant the algorithm couldn’t learn; it was the Google engineers who learned, and who then changed the hard-coded algorithm. That was the reality up until 2015, when Google launched RankBrain, a machine-learning component that enables the algorithm itself to learn.
What does this mean for SEO? Traditional SEO, such as on-page, off-page and content marketing, has since plummeted in value. This is because the algorithm has learned the common Search-Engine-beating tactics, which until now have involved <h1> optimisation, keyword maximisation, query matching, URL optimisation and other factors.
TSEO revolves around functions that Search Engines can’t exist without: crawling, parsing, processing, and indexing. Search Engines must scrape data from websites, parse that information, process it, and finally index it. Each of these steps is inescapable for a Search Engine, and that’s where TSEO lies.
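To make the four steps concrete, here is a toy pipeline in Python. Everything in it is illustrative (the hard-coded markup, the naive keywording); real Search Engines perform each step at massive, distributed scale.

```python
# The four inescapable steps as a toy pipeline. All of it is illustrative.

def crawl(url: str) -> str:
    # Fetch raw markup (hard-coded here instead of an HTTP request).
    return "<h1>CRISPR Explained</h1><p>Intro to gene editing.</p>"

def parse(html: str) -> dict:
    # Turn raw markup into structure: pull out the <h1> text.
    start = html.index("<h1>") + 4
    end = html.index("</h1>")
    return {"h1": html[start:end]}

def process(doc: dict) -> dict:
    # Derive meaning: naive keywording from the heading.
    return {"keywords": doc["h1"].lower().split()}

def index(entry: dict) -> dict:
    # Store the page under each keyword so queries can find it.
    return {kw: entry for kw in entry["keywords"]}

print(index(process(parse(crawl("https://example.com/post")))))
```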
What can be optimised?
As stated above, Search Engines rely on crawling, parsing, processing, and indexing. Within each of these steps there are therefore optimisable factors.
Crawling is carried out by data scrapers (crawlers) that Search Engines send out to analyse websites. They take as much valuable information from your website as possible back to the Search Engine for the following steps. Some of what the crawler picks up it remembers, giving your website a -1 or a +1 depending on whether it is beneficial. These are some of the things crawlers collect:
- <meta charset> character encoding
- <head> information (such as Google Analytics snippets)
- <h1> tags and their content
- <p> content
- <nav> structure and crawlability
- <img> information, descriptions, and alt attributes
- CSS files (style.css): design and load speed
- JavaScript (.js) files, if readable and unobfuscated
- index.php and the critical-request depth of the directory structure
- Response codes, such as 404, 301, or 410
- Structured Data (ld+json) scripts
Now these are just some of the many elements present on your website! Each potentially carries a +1 or -1 for the crawler, making its job easier or more difficult. For example, a complex or circular <nav> earns you a big -5, because it makes crawling so much more difficult.
Whereas if you have a very lean <index.php> that can call critical files efficiently, you get a whopping +5 for responsiveness!
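As a sketch of the collection step, the snippet below uses Python’s standard-library HTMLParser to pull three of the signals listed above (charset, <h1> text, and <img> alt text) out of a hard-coded page. The page content is made up; a real crawler would fetch it over HTTP.

```python
from html.parser import HTMLParser

# Illustrative page; a real crawler would fetch this over HTTP.
PAGE = """
<html><head><meta charset="utf-8"><title>Acme Biotech Blog</title></head>
<body>
  <h1>CRISPR Explained</h1>
  <nav><a href="/services">Services</a></nav>
  <img src="lab.jpg" alt="Gene editing lab">
  <p>Intro to gene editing.</p>
</body></html>
"""

class CrawlerSketch(HTMLParser):
    """Collects a few of the signals listed above: charset, h1, img alt."""
    def __init__(self):
        super().__init__()
        self.signals = {"charset": None, "h1": [], "img_alt": []}
        self._in_h1 = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "charset" in attrs:
            self.signals["charset"] = attrs["charset"]
        elif tag == "h1":
            self._in_h1 = True
        elif tag == "img":
            self.signals["img_alt"].append(attrs.get("alt", ""))

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_h1 and data.strip():
            self.signals["h1"].append(data.strip())

crawler = CrawlerSketch()
crawler.feed(PAGE)
print(crawler.signals)
```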
Now that this information has been collected, it’s time to put it into a format and place where Google can process it. Think of this as the linkage between crawling and processing, enabling Google’s AI algorithms to efficiently understand what your website is about.
Parsing usually applies to text – the act of reading text and converting it into a more useful in-memory format, “understanding” what it means to some extent. So for example, an XML parser will take the sequence of characters (or bytes) and convert them into elements, attributes etc. (Stack Overflow)
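The quoted definition in practice, using Python’s standard-library XML parser (the markup is an invented example): a sequence of characters goes in, elements and attributes come out.

```python
import xml.etree.ElementTree as ET

# A raw sequence of characters...
raw = '<article lang="en"><title>TSEO</title><p>Crawl, parse, process, index.</p></article>'

# ...parsed into elements and attributes the machine can work with.
root = ET.fromstring(raw)
print(root.tag, root.attrib)    # article {'lang': 'en'}
print(root.find("title").text)  # TSEO
```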
The crawler has collected information, but there’s no guarantee it can be parsed unless it was collected in a parsable form. Websites that lack parsability receive “crawl anomalies”: the Search Engine can’t process the data, so it assumes an error.
How do you ensure it’s parsable?
Well, you need to make sure of the following:
- There are no conflicting lines of code
- Clear canonical tags
- Distinct page headings and <title> elements to avoid duplicate content
- No keyword stuffing
- No image stuffing
- No white text/white background
- Run parsability plugins
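One of those checks can be sketched in a few lines. Here a hypothetical site map is scanned for duplicate <title> values, a common cause of duplicate-content flags; the pages and titles are invented for illustration.

```python
from collections import Counter

# Hypothetical path -> <title> map for a small site.
titles = {
    "/": "Acme Biotech | Home",
    "/blog": "Acme Biotech | Blog",
    "/about": "Acme Biotech | Home",  # duplicate: likely a copy-paste error
}

counts = Counter(titles.values())
duplicates = [title for title, n in counts.items() if n > 1]
print(duplicates)  # ['Acme Biotech | Home']
```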
The Search Engine has now been delivered a full load of information in an understandable way, and must categorise your website for ranking purposes. How do you ensure that the processed data reflects what your website really is?
By that I mean, how does Google know what your site is about? Is your biotech blog about Teletubbies? How does Google know not to rank you for Penguin searches?
Well that’s based on what’s been collected and parsed. By ensuring the right things are crawled, and ensuring all of the crawled things are parsed correctly, you provide Google with the best understanding of your website.
Factors that are crucial to how processing defines your website:
- On-Page keywords
- Image titles, captions, descriptions and alt-tags
- <h1> tag information
- Navigation headers (“Services”, for example, would indicate a point of sale)
- No 403/404/410 errors
- Parsability of valuable information
- Internal links to other pages
- User specified canonical tags
Ensuring all of these are crawled and parsed means Google can pick them up. This stage is closely related to on-page optimisation.
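As an example of one processing signal from the list, the sketch below separates internal links from external ones, the kind of distinction a Search Engine makes when mapping your site. The hostname and markup are invented.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Sorts <a href> targets into internal (same host) and external."""
    def __init__(self, site_host):
        super().__init__()
        self.site_host = site_host
        self.internal, self.external = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        host = urlparse(href).netloc
        # Relative links have no host, so they count as internal.
        if host in ("", self.site_host):
            self.internal.append(href)
        else:
            self.external.append(href)

html = '<a href="/blog/crispr">Read more</a> <a href="https://example.org/ref">Source</a>'
c = LinkCollector("acme-biotech.example")
c.feed(html)
print(c.internal, c.external)
```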
Indexing is the most important part of any Search Engine: how is your website indexed, and do you have Response Codes impacting your performance?
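A rough sketch of how the common response codes relate to indexing. The mapping below reflects the codes’ standard HTTP meanings; exactly how any particular Search Engine acts on them is an assumption.

```python
# Rough, assumed mapping from HTTP response codes to indexing effect,
# based on the codes' standard meanings (200 OK, 301 Moved Permanently,
# 404 Not Found, 410 Gone).
RESPONSE_EFFECTS = {
    200: "indexable",
    301: "index the redirect target, pass signals",
    404: "drop from index after repeated failures",
    410: "drop from index promptly (gone)",
}

def indexing_effect(status: int) -> str:
    return RESPONSE_EFFECTS.get(status, "unknown")

print(indexing_effect(410))  # drop from index promptly (gone)
```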