

Do you have a website? Have you been wondering why some of your web pages don’t show up in Google or other search engines? Or why less important site pages show up while the pages you think are more important do not? There is something you can do about that. You can use Rage Sitemap Automator (RSA) to create an XML sitemap and help the major search engines find your most important site content.

How do search engines like Google find sites and add pages to their vast indexes? They use “web crawlers.” A web crawler, or “spider,” is a program or automated script that browses the web in a methodical manner, creating a copy of the pages it finds, which are added to the search engine’s index. Google uses the “Googlebot” to find and index pages. From Wikipedia: “Googlebot discovers pages by harvesting all of the links on every page it finds. It then follows these links to other web pages. New web pages must be linked to or from another known page on the web in order to be crawled and indexed.”

The problem with just waiting for Google, or the other search engines, to find your pages is the hit-and-miss nature of the process. The data collected doesn’t tell the search bot how important one page might be relative to another, or how often a particular page is updated. And the longer a search bot spends crawling your site, the more bandwidth it sucks up.

RSA creates a properly formatted XML sitemap that the search bot can easily read, containing important information such as which pages are more important than others and how frequently a particular page is updated. RSA is fairly intuitive to use, and it comes with a good set of instructions, which was a good thing, since I did have to refer to them quite a bit to understand some of the numerous options RSA offers.

Launch RSA and type your website’s URL in the box provided. RSA goes out to your site and begins to index all of your pages. This may take a while if you have many pages. RSA indexes everything on your site, not just items ending in “.html”: it will collect URLs for anything with a link, including graphics that link to something else and URLs that end in … After scanning and indexing your site, you’ll use RSA’s excellent filters to get rid of the stuff you don’t want in your sitemap. These filters are quite powerful and fairly easy to use; after running them, I squeezed the number of items in my sitemap down to 321. The ability to save these filters and use them again and again is a great feature.

Some of the other data you can tweak are the “Change Frequency” and “Priority” settings for each page. Under the frequency column, a drop-down box appears for each page where you can designate how often the content is refreshed, using tags such as “daily” or “never.” Under the “Priority” tab, you use a scale of .1 to 1.0, where 1.0 designates a page as very important on your site.

Using the “Publish Sitemap” tab, you upload the sitemap you created to your host server. Rage has even made RSA easy to use with your iDisk if you are using iWeb to maintain a website. After that, the instructions give you a URL to set up a Google Webmaster Tools account. I didn’t even know that such a thing existed.
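If you are curious what “harvesting all of the links on every page” actually means in practice, it can be sketched in a few lines of Python. This is only a toy illustration of the idea using the standard library’s HTML parser, not Googlebot’s (or RSA’s) actual code, and the sample page is made up:

```python
from html.parser import HTMLParser

class LinkHarvester(HTMLParser):
    """Collect the href of every <a> tag, the way a crawler
    harvests a page's links before following them."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A made-up page: note the second link points at an image,
# which is why a crawler (or RSA) picks up more than just ".html" URLs.
page = """
<html><body>
  <a href="/about.html">About</a>
  <a href="/photos/cat.jpg"><img src="thumb.jpg"></a>
</body></html>
"""

harvester = LinkHarvester()
harvester.feed(page)
print(harvester.links)  # -> ['/about.html', '/photos/cat.jpg']
```

A real crawler would then fetch each harvested URL and repeat the process, which is exactly why a page nothing links to never gets found on its own.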

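For the curious, the sitemap file RSA publishes follows the standard sitemaps.org XML format, with a `loc`, `changefreq`, and `priority` entry per URL. Here is a hand-rolled sketch of that format in Python; the URLs, frequencies, and priorities are invented examples, not output from RSA:

```python
from xml.etree import ElementTree as ET

# Standard sitemaps.org namespace used by XML sitemaps.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

# Made-up pages: (URL, change frequency, priority).
pages = [
    ("http://www.example.com/", "daily", "1.0"),
    ("http://www.example.com/about.html", "never", "0.3"),
]

urlset = ET.Element("urlset", xmlns=NS)
for loc, freq, prio in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "changefreq").text = freq  # "daily", "never", ...
    ET.SubElement(url, "priority").text = prio    # 0.1 (low) to 1.0 (high)

sitemap_xml = ET.tostring(urlset, encoding="unicode")
print(sitemap_xml)
```

The `changefreq` and `priority` elements carry exactly the settings described above, which is how the sitemap tells a search bot what a crawl alone cannot: which pages matter most and how often to come back.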