Think about a visit from the robot's point of view. It finds a site, usually by following links from other web pages, then records or saves the page.
It extracts just the text from the page, including the title, and discards the HTML coding. It then catalogues every word on the site using a set of rules known as an algorithm. It also follows the links on your site and retrieves whatever information it finds there. All of this information is then stored.
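The steps above — strip the HTML, keep the title, catalogue the words, and collect the links — can be sketched with Python's standard library. This is a toy illustration, not how any real search engine is written; the sample page and the `index_page` helper are invented for the example.

```python
from html.parser import HTMLParser
from collections import Counter
import re

class TextExtractor(HTMLParser):
    """Collects the page title, the visible text, and the links,
    ignoring all other HTML markup."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self._in_title = False
        self.text_parts = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        else:
            self.text_parts.append(data)

def index_page(html):
    """Return (title, word counts, outgoing links) for one page."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.text_parts)
    words = re.findall(r"[a-z']+", text.lower())  # catalogue the words
    return parser.title, Counter(words), parser.links

# A made-up page standing in for one the robot just fetched.
page = """<html><head><title>Gardening Tips</title></head>
<body><p>Water your garden daily. A healthy garden needs sun.</p>
<a href="http://example.com/soil">Soil guide</a></body></html>"""

title, word_counts, links = index_page(page)
print(title)                  # Gardening Tips
print(word_counts["garden"])  # 2
print(links)                  # ['http://example.com/soil']
```

The links collected in the last step are what the robot follows next, which is how it crawls from page to page.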
Now it knows how many pages you have, how relevant your site is for a given keyword, and how many “outside” links point to your site, and it can give your site a “score” based on how it is set up. That score is what makes your site end up ranked high or low.
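To make the idea of a "score" concrete, here is a deliberately simplified scoring function. The weights and the formula are invented for illustration; real search engines combine far more signals and keep the details secret.

```python
def score_page(word_counts, total_words, keyword, inbound_links):
    """Toy relevance score: how often the keyword appears on the page,
    boosted by how many outside links point to the site.
    The weights (100 and 0.5) are arbitrary, chosen only for this sketch."""
    frequency = word_counts.get(keyword, 0) / max(total_words, 1)
    return frequency * 100 + inbound_links * 0.5

# A made-up word catalogue for one small page.
counts = {"garden": 4, "water": 2, "sun": 1}
total = sum(counts.values())

print(round(score_page(counts, total, "garden", inbound_links=10), 2))
```

A page that uses the keyword more often, or that more outside sites link to, scores higher and ranks above its competitors.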
It then travels the links that leave your web site and visits new web sites.