Archive for August 2008

Google – Closer to Home?

Today Google announced the release of the Gears Geolocation API, a system designed to provide websites with location information even without the aid of GPS hardware. The Geolocation API can use GPS, cell-tower triangulation, or, somewhat less reliably, your computer’s IP address to find your location. It’s currently available for Internet Explorer, Firefox, and IE Mobile.
The Geolocation API has two JavaScript methods: getCurrentPosition() makes a single, one-off attempt to get a position fix, while watchPosition() watches the user’s position over time, and provides an update whenever the position changes. Both methods allow you to configure which sources of location information are used. Gears also keeps track of the best position fix obtained from these calls and makes it available as the lastPosition property. This is a simple way to get an approximate position fix with low cost in terms of both network and battery resources.
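For illustration, here is a minimal sketch of how those calls fit together, assuming the standard gears_init.js bootstrap has been loaded on the page; updateMap() is a placeholder handler of my own, not part of the API:

// Create the geolocation object (requires gears_init.js on the page).
var geo = google.gears.factory.create('beta.geolocation');

// One-off fix: success callback, error callback, options.
geo.getCurrentPosition(
  function (position) {
    updateMap(position.latitude, position.longitude); // placeholder handler
  },
  function (error) {
    alert('Could not get a fix: ' + error.message);
  },
  { enableHighAccuracy: false } // allow cell-tower / IP-based lookup
);

// Continuous updates: the callback fires whenever the position changes.
var watchId = geo.watchPosition(function (position) {
  updateMap(position.latitude, position.longitude);
});
// Later: geo.clearWatch(watchId);

// Cheapest option: the best fix Gears has already obtained, if any.
if (geo.lastPosition) {
  updateMap(geo.lastPosition.latitude, geo.lastPosition.longitude);
}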
To use the two location-enabled web apps available so far, lastminute.com and Rummble, you will need a Windows Mobile device that supports GPS or cell-ID lookup (for example, the Samsung Blackjack II or HTC Touch Dual).

WANTED: 6.8 Billion Googles

It seems to be common sense: if you are searching for something, you should be the dominating filter. ‘You’ in this case might be: male, 38 years old, residing in Southern California, speaking American English, driving an SUV, with a college education in … etc. The more of ‘you’ you inject into a search, the fewer results you get, and consequently the less time you spend looking through results that do not matter to you. So, shouldn’t there be 6.8 Billion Googles?
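Purely to illustrate that argument (not anything the search engines actually do), a personal filter could be as crude as re-scoring the one-size-fits-all results against a profile; the profile fields and scoring below are entirely made up:

// Hypothetical sketch: re-rank generic results against a user profile.
var profile = {
  language: 'en-US',
  region: 'Southern California',
  interests: ['SUV']
};

function personalScore(result, profile) {
  var score = result.genericRank; // the same-for-everyone ranking
  if (result.language === profile.language) { score += 1; }
  if (result.region === profile.region) { score += 1; }
  for (var i = 0; i < profile.interests.length; i++) {
    if (result.tags && result.tags.indexOf(profile.interests[i]) !== -1) { score += 1; }
  }
  return score;
}

// Drop results the profile says nothing about, best personal matches first.
function personalize(results, profile) {
  return results
    .filter(function (r) { return personalScore(r, profile) > r.genericRank; })
    .sort(function (a, b) { return personalScore(b, profile) - personalScore(a, profile); });
}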

Instead, search engines typically use one filter (algorithm) for all of their users, consequently delivering the same results to everyone. Index size does not matter, even if the latest PR campaign by search engine Cuil seems to suggest otherwise. More surprisingly, although Cuil apparently does not put any weight on keyword placement (specifically in the domain and title tag), there are no signs of user-centered design either. Instead, a search for even the most obscure keywords brings up tens of thousands of ‘relevant’ SERPs (search engine results pages), most of which will never see the backlight of a monitor.

Do the thousands of super-smart search engineers employed by the leading search engines miss the obvious? Not likely! So why does it seem that the dominant search portals Google, Yahoo, and MSN are holding on to patterns that seem inherently flawed?

The answer – again – seems obvious: there is little to no competitive pressure on the existing search engines to correct course. If reducing the time it takes to find the (only) relevant result by 50% also means reducing profits by 50%, would YOU do it?