This "How does Google read my website?" video lesson looks at the process of Googlebot crawling a website.
Google's web crawling, indexing, and even keyword ranking are all done automatically by computer programs called algorithms. Google does not give special preference to any particular website, and the amount of data Google efficiently crawls and indexes is huge. You can find out how Google actually works by visiting the two links below:
https://support.google.com/webmasters...
https://www.google.com.au/insidesearc...
Google's crawling and indexing starts by requesting your web page (URI): Googlebot (Google's web crawler, or user-agent) sends an HTTP request for each URI on its list of links to be fetched next. It does this continuously, and the timing of this crawl process is also managed automatically (the crawl rate).
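To make that fetch loop concrete, here is a minimal sketch in Python. It is an illustration only, not how Googlebot is actually implemented: the in-memory queue, the starting URL, and the simple flow are all assumptions, and real crawl-rate management is far more sophisticated.

# A toy "to be fetched next" queue and fetch loop (illustration only).
from collections import deque
from urllib.request import Request, urlopen

USER_AGENT = "Googlebot/2.1 (+http://www.google.com/bot.html)"  # Googlebot's user-agent string

to_be_fetched = deque(["https://example.com/"])  # hypothetical starting link
seen = set()

while to_be_fetched:
    url = to_be_fetched.popleft()
    if url in seen:
        continue  # skip links we have already fetched
    seen.add(url)
    # Send an HTTP request, identifying the crawler via the User-Agent header.
    request = Request(url, headers={"User-Agent": USER_AGENT})
    with urlopen(request) as response:
        html = response.read()
    # A real crawler would now extract links from `html`, append them to
    # the queue, and throttle requests according to its crawl-rate policy.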
When Googlebot first requests your URI, it looks for a file called robots.txt and reads it to check for special rules, such as whether it is disallowed from crawling certain parts of your website. If this file is present and contains user-agent directives, Googlebot obeys those rules and only accesses the parts of your website that you specify (within your robots.txt file).
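As a rough sketch of that check, the example below uses Python's standard-library robots.txt parser. The rules and URLs are invented for illustration; they are not from the video.

# Checking hypothetical robots.txt rules before crawling (illustration only).
from urllib.robotparser import RobotFileParser

# A site owner might publish rules like these in robots.txt:
robots = RobotFileParser()
robots.parse([
    "User-agent: Googlebot",
    "Disallow: /private/",
])

# An obedient crawler checks each URL against the rules before fetching it.
print(robots.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False: disallowed
print(robots.can_fetch("Googlebot", "https://example.com/blog/post.html"))     # True: allowed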
To learn more about the robots.txt file, visit:
https://youtu.be/xNH8NENh5zs