Saving the Internet (Efficient Information Distribution)
Planet Earth is approximately 4.54 billion years old. Modern humans first evolved around 200,000 years ago. To put this into perspective: if the world were one day old, humans would have spent less than four seconds on its surface. Yet in this relatively short time frame, mankind has had a more dramatic effect on the earth’s environment than all other species combined. Together with other, non-anthropogenic causes, human behavior has significantly increased the likelihood of human extinction within this century.
Researchers across different fields of science agree that the largest threats to the human species are man-made. However, the public’s perception is widely distorted by an overwhelming amount of unscientific material flooding the media. This problem is exacerbated by groups and individuals publishing information via the World Wide Web that is driven by commercial interest and by beliefs unfiltered by critical thinking.
The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web, the infrastructure to support email, and peer-to-peer networks for file sharing and telephony. The Internet has been widely used by academia since the 1980s and was a promising concept for the exchange of knowledge. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life. However, as early as 2000, researchers voiced concerns that the Internet had become weakest at doing what it was originally designed for: exchanging raw data between researchers.
Organizing the world’s knowledge and making it universally accessible and useful
Knowledge is an awareness and understanding of information, acquired through experience or education. Knowledge can refer to a theoretical or practical understanding of a subject. It can be implicit (as with practical skill or expertise) or explicit (as with the theoretical understanding of a subject). In philosophy, the study of knowledge is called epistemology; the philosopher Plato famously defined knowledge as “justified true belief” (although “well-justified true belief” is more complete, as it accounts for the Gettier problems). Knowledge acquisition involves complex cognitive processes: perception, communication, and reasoning; knowledge is also tied to the human capacity for acknowledgment.
For more than a decade now, Google has been the dominant search engine, consistently handling more than 85% of global Internet searches according to NetMarketShare. During this time there have been no serious attempts to challenge the market dominance of the company that claims to be on a mission to organize the world’s information and make it universally accessible and useful. As outlined before (see this article), it is highly unlikely that Google will succeed in doing so; indeed, it is most likely to blame for the majority of ad copy disguised as content flooding the World Wide Web. The fact that the most visible part of the Internet is shaped in this way by a for-profit entity should worry every thinking person. Did you know that Google is now the third-largest company in the US in terms of revenue, and that more than 90% of the company’s revenue is paid by advertisers ($5.77 million per hour)?
Premises for a sustainable information management strategy
The following is an attempt to describe the fundamental premises needed for a sustainable information management strategy (“Search”) that could save the World Wide Web from the tragedy of the commons (and, yes, put Google out of business).
Premise One: Information is a natural resource and needs to be treated as such (see natural resource management). As with all natural resources, no single person, entity, group, or culture can claim exclusive rights to information. Just as physical access to water needs to be available to every human being, the information on how to reach this resource is inseparably attached to it.
Premise Two: Access to information is a human right. To protect and promote essential human interests, especially the unique human capacity for freedom (see Andrew Fagan), access to information has to be free. Censorship, as well as monopolized information organization (as de facto practiced by Google), is hence a human rights violation. With ‘right’ being synonymous with ‘legal’ and antonymous to both ‘wrong’ and ‘illegal’, every ‘right’ of any human person is ipso facto a ‘legal right’ that deserves the protection of law and legal remedy, irrespective of whether it has been written into the law or constitution of any country (Ipso Facto Legal Rights Theory).
Premise Three: Knowledge and access to information are the natural enemies of belief (paraphrasing Plato). Belief is the enemy of progress. Or in my own words: belief is simply the absence of knowledge (more here). An effective information management system will be able to identify and discard information that violates basic principles of objectifiable reality or otherwise rests on non-verifiable, non-falsifiable arguments. Unscientific theory is not intrinsically false or inappropriate, however, as metaphysical theories might be true or contain truth, and can help inform science or structure scientific theories. Simply put: to be scientific, a theory must predict at least some observation potentially refutable by observation.
Premise Four: Evolutionary organization of information cannot be democratic and must follow logic (i.e. peer review), not popularism. We are all “standing on the shoulders of giants” (Newton). No progress can be made without understanding the research and works created by the notable thinkers of the past. Social proof is anything but proof. Google’s philosophy, which assumes that democracy on the web works, is demonstrably false (read more here). A functional information management system will employ Hebbian theory. Just as biological neuroscience explains the adaptation of neurons in the brain during the learning process, the same model can describe a basic mechanism of “synaptic” plasticity in connected systems, wherein an increase in synaptic efficacy arises from the repeated and persistent stimulation of a postsynaptic unit by presynaptic cells/nodes (connected ‘brains’).
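The Hebbian mechanism described above can be sketched in a few lines. The learning rate and activity values here are illustrative assumptions, not part of the original premise; the point is only that repeated co-activation strengthens a link between two nodes.

```python
# Minimal Hebbian update: the connection between two nodes strengthens
# whenever they are active together ("cells that fire together wire together").
# The learning rate of 0.1 is an arbitrary illustrative choice.

def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    """Return the new synaptic weight after one co-activation step."""
    return weight + learning_rate * pre_activity * post_activity

# Repeated, persistent stimulation increases the efficacy of the link:
w = 0.0
for _ in range(10):
    w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)
print(round(w, 2))  # → 1.0
```

In an information network, “activity” would correspond to two nodes (people or documents) being engaged together, so frequently co-consulted sources grow more strongly connected over time.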
Premise Five: Commercial interests corrupt and sway development. Consequently, the potential of connected systems and connected knowledge has been underutilized, and (altruistic) progress has de facto been halted, as the majority of Internet users have accepted a marketing-driven presentation layer, essentially censorship, as the status quo.
Premise Six: DNA before intent and projection. What is needed is an objectified classification of the human element (which I label “DNA”) within the network. Intent (i.e. Google, “search”) and projection (i.e. Facebook) are non-directional approaches. A directional approach requires locating the user on more than just the location level; it must also include level of education, knowledge, etc.
Premise Seven: Capturing the cognitive surplus. Cognitive surplus, as used here, extends beyond crowd-sourcing by utilizing any type of engagement with any type of medium that can be contextually measured, thereby assigning a qualitative element. What is needed is the utilization of the latent potential inherent in the use of information itself. For example: access to specific information by a specific individual carries a qualitative measure more relevant than any hyperlink; consider a research scientist spending time on a website containing information relevant to his or her field of expertise, as well as his or her engagement with other (digital) information related contextually as well as chronologically.
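The qualitative weighting described in this premise could be sketched as follows. The function name, the field sets, and the weighting formula are all hypothetical; the sketch only illustrates the idea that a specialist’s engagement with material in their own field should count for more than an anonymous click.

```python
# Hypothetical sketch of "cognitive surplus" capture: an engagement event is
# weighted by the topical overlap between the user's expertise and the
# document's subject, so dwell time from a domain expert carries more weight.
# All names and the formula are illustrative assumptions.

def engagement_score(dwell_seconds, user_expertise, doc_topics):
    """Scale raw dwell time by topical overlap between user and document."""
    overlap = len(user_expertise & doc_topics) / max(len(doc_topics), 1)
    return dwell_seconds * (1.0 + overlap)

specialist = {"oncology", "genomics"}
layperson = set()
paper_topics = {"oncology", "statistics"}

print(engagement_score(120, specialist, paper_topics))  # → 180.0
print(engagement_score(120, layperson, paper_topics))   # → 120.0
```

The same 120 seconds of reading time scores higher for the specialist because it encodes a judgment by someone qualified to make it, which is exactly the signal a hyperlink alone cannot carry.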
Premise Eight: Discarded information carries value. There is a strong tendency among researchers, editors, and pharmaceutical companies to report and publish experimental results that are positive (i.e. showing a significant finding), but very few results that are negative (i.e. supporting the null hypothesis) or inconclusive (publication bias). Effective information management will have to include negative results.
Premise Nine: Promote viral distribution of successful concepts while building ‘herd immunity’ against the adoption of destructive or dysfunctional paradigms. Herd immunity describes a form of immunity that occurs when the vaccination of a significant portion of a population (or herd) provides a measure of protection for individuals who have not developed immunity. Herd immunity theory proposes that, in contagious diseases transmitted from individual to individual, chains of infection are likely to be disrupted when large numbers of a population are immune or less susceptible to the disease. The greater the proportion of individuals who are resistant, the smaller the probability that a susceptible individual will come into contact with an infectious individual. The concept translates to information and its consumption by individuals.
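The standard epidemiological version of this idea has a simple quantitative form: once more than 1 − 1/R0 of a population is immune (where R0 is the average number of people each case infects), each new case infects fewer than one person and chains of transmission die out. Applying R0 to the number of people to whom one person relays an idea is this article’s analogy, not established epidemiology.

```python
# The classic herd-immunity threshold: the fraction of a population that
# must be immune for an outbreak with basic reproduction number r0 to
# shrink rather than grow.

def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune to halt spread."""
    if r0 <= 1:
        return 0.0  # the outbreak dies out on its own
    return 1.0 - 1.0 / r0

print(herd_immunity_threshold(4))  # → 0.75
```

Read through the premise’s lens: the more readily a dysfunctional idea spreads (higher R0), the larger the share of the population that must be ‘immunized’ with verifiable knowledge before the idea stops propagating.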
Premise Ten: Overcome linguistic barriers. Humans, like other primates, are notable for their social qualities. But beyond any other creature, humans are adept at utilizing systems of communication for self-expression, the exchange of ideas, and organization, and as such have created complex social structures composed of many cooperating and competing groups, from families to nations. Social interactions between humans have established an extremely wide variety of values, social norms, and rituals, which together form the basis of human society; at the same time, this diversity leads to misunderstanding and fear (of the unknown). An effective information management system will first have to overcome linguistic barriers before it can move on to the transfer of knowledge. Hence any approach starting at the semantic level will fall short of this goal.
Premise Eleven: Create an effective marketplace for information exchange. Information is the ultimate ‘derivative’ of any asset. However, only a small fraction of information is available through organized marketplaces, most of which shift compensation toward the aggregation and distribution of the asset. An effective marketplace for information exchange will focus on compensating the creation and curation of information, putting the emphasis on the quality of information rather than its “liquidity” (accessibility).
Premise Twelve: Create an energy-optimized information system that does not require new infrastructure investments. Each connected system must not only capture and disseminate its own data but also serve as a relay for other systems (nodes); that is, it must collaborate to propagate data through the network (the definition of a mesh network). Current ‘search engines’ are highly inefficient and add to the pollution of our environment. According to research, performing two Google searches from a desktop computer can generate about the same amount of carbon dioxide as boiling a kettle for a cup of tea. Though Google says it is at the forefront of green computing, its search engine generates high levels of CO2 because of the way it operates: when you type in a Google search for, say, “energy saving tips”, your request doesn’t go to just one server; it goes to several competing against each other, and it may even be sent to servers thousands of miles apart.
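The relay behavior that defines a mesh network can be sketched minimally. The topology and node names below are illustrative assumptions; the point is that every node both receives a message and forwards it to its neighbors, so data propagates with no central server.

```python
# Minimal sketch of mesh-network flooding: each node relays a message to
# its neighbours exactly once, so the data reaches every connected node
# without any central infrastructure. Topology is an illustrative assumption.

def flood(network, start, message):
    """Relay `message` from `start`; return {node: message} for every node reached."""
    received = {start: message}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbour in network.get(node, []):
            if neighbour not in received:  # each node relays only once
                received[neighbour] = message
                frontier.append(neighbour)
    return received

mesh = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(sorted(flood(mesh, "A", "hello")))  # → ['A', 'B', 'C', 'D', 'E']
```

Note that node D is reachable via both B and C; the `received` check is what keeps each node from relaying the same message twice, which is the basic efficiency property the premise relies on.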
Premise Thirteen: Create a qualified smart mob collaboration tool (within the peer-to-peer layer) for impromptu responses to crisis situations and to actively drive topic progress. A smart mob is a group that, contrary to the usual connotations of a mob, behaves intelligently or efficiently because of its exponentially increasing network links. This network enables people to connect to information and to others, allowing a form of social coordination (the concept was introduced by Howard Rheingold in his book Smart Mobs: The Next Social Revolution).
Conclusion. What is needed is a search engine in the form of an open-source, independent, distributed search network and storage system (“Wiki”) designed to utilize the resources of all machines and all humans, including their relationship to each document (owner, user, contributor, etc.) as well as their profile and expertise. It should foster logic-driven, evolution-like progress by compensating contribution, while overcoming artificial barriers such as culture and language in a mesh-networked structure.
Prototype. A prototype should combine the following elements:
- browser/digital document viewer (based on an open-source SDK, likely Chromium);
- advertising filter;
- file sharing, based on:
  - group settings and/or
  - user classification/identification through artificial intelligence.
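The conclusion calls for capturing each human’s relationship to a document (owner, user, contributor) alongside their profile and expertise. A minimal data model for that could look as follows; every class and field name here is a hypothetical illustration, not a specification.

```python
# Hypothetical data model for the network described in the conclusion: each
# participant carries a profile ("DNA": languages, expertise) and an explicit
# relationship to every document they touch. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Participant:
    user_id: str
    languages: list
    expertise: set = field(default_factory=set)

@dataclass
class DocumentLink:
    doc_id: str
    participant: Participant
    relationship: str  # e.g. "owner", "user", or "contributor"

alice = Participant("alice", ["en", "de"], {"epidemiology"})
link = DocumentLink("doc-42", alice, "contributor")
print(link.relationship)  # → contributor
```

Modeling the relationship explicitly, rather than inferring it from clicks, is what would let the system weight an owner’s or contributor’s engagement differently from a casual reader’s, as the premises above require.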