Blockchain and Distributed Networks – Relaunching the World, the Wild, the Web

The planet we live on is some 4.5 billion years old. While people call it ‘our world’, Earth is naturally indifferent to ideas of ownership in general and to human beings in particular. To put this into perspective, Homo sapiens first evolved around 200,000 years ago. So, if the world were a day old, modern humans would have spent less than four seconds on its surface.

In this relatively short time frame humankind has truly made its mark on the world, destroying wildlife and causing more environmental destruction than all other species over millions of years. Together with non-anthropogenic causes, human behavior has dramatically increased the likelihood of human extinction in the very near future.

Researchers across scientific fields agree that the largest threats to the human species – including climate change – are man-made. However, the public’s perception is widely warped by an overwhelming amount of non-scientific material flooding the media. This problem is exacerbated by groups and individuals publishing information driven by belief and commercial interest, unfiltered by critical thinking and willfully ignorant of externalities.

The mostly forgotten purpose of the web

The Internet carries an extensive range of informational resources and services, most visibly the interlinked hypertext documents and applications of the World Wide Web, the infrastructure that supports email, and peer-to-peer networks for file sharing, phone services and, now, money exchange. The Internet has been widely used by academia since the 1980s and was once a promising concept for the exchange of knowledge.

The popularization of what was, by the 1990s, an international network resulted in its commercialization and incorporation into virtually every aspect of modern human life. As a result, as early as 2000 scientists voiced concerns that the Internet had become weakest at doing the very thing for which it was originally designed: exchanging knowledge between researchers.

Organizing the world’s knowledge

For more than a decade now, Google has been the dominant “search engine” (an accepted, but largely inaccurate, description of the company’s DNA) for the World Wide Web, consistently used for more than 85% of global Internet searches according to NetMarketShare.

Since 2015 the Google search engine has been a profit center of the public holding company Alphabet Inc. While Google’s advertising business is at present pretty much the only profitable enterprise of the holding company, the organization has demonstrated strong ambitions to extend its search (read: advertising) dominance over the web and into all Internet-connected devices.

Somewhat surprisingly, a decade has gone by without any serious attempt to challenge the market dominance of the company that claims to be on a mission to organize the world‘s information and make it universally accessible and useful. As outlined above, it is highly unlikely that Google – or any for-profit entity – will succeed in doing so.

Search engines – and specifically Google – are directly and indirectly responsible for the majority of ad copy disguised as information flooding the World Wide Web today, having created an entirely new industry that produces nothing of value: search engine optimization. Millions of people (mostly in lower-wage countries) spend their days creating bogus content, hyperlinks to landing pages and comment spam in otherwise useful online magazines and forums.

The fact that the most visible part of the Internet is influenced in this way by a for-profit entity should worry every thinking person, and has certainly attracted the attention of European watchdogs. Did you know that Google is now the third-largest company in the US in terms of revenue, and that more than 90% of the company’s revenue is paid by advertisers ($5.8 million per hour)?

Preface: Perspective and Priorities

Astronauts often describe a profound experience when viewing planet Earth from space for the first time. It is worth taking at least a few steps back when evaluating most topics, spending 20% of the time on the problem description and 80% on the solution.

Premises for sustainable networks (and the information they carry)

The following is an attempt to describe the fundamental premises needed for a sustainable information management strategy (“Search”) in service of the wellbeing of all living things, hopefully saving the World Wide Web from the tragedy of the commons.

Premise One: The Internet is a commons – a shared resource in which each stakeholder has an equal interest (community control). No single person, entity, group or culture can claim exclusive rights to information. Just as physical access to water needs to be available to every human being, the information about how to reach this resource is inseparably attached to it. The opposite of collective rights is not private rights purchased from the collective, but common rights that precede the collective.

Premise Two: Access to information must be considered a human right. To protect and promote essential human interests, especially the unique human capacity for freedom (see Andrew Fagan), access to information has to be free. Censorship, as well as monopolized information organization (as de facto practiced by Google), is hence a human rights violation. ‘Right’ being synonymous with ‘legal’ and antonymous with both ‘wrong’ and ‘illegal’, every ‘right’ of any human person is ipso facto a ‘legal right’ which deserves the protection of law and legal remedy, irrespective of having been written into the law or constitution of any country (Ipso Facto Legal Rights Theory).

Premise Three: Knowledge and access to information are the natural enemies of belief (paraphrasing Plato). Belief is the enemy of progress. Or in my own words: belief is simply the absence of knowledge. An effective information management system will be able to identify and discard information that violates basic principles of objectifiable reality or otherwise rests on non-verifiable/non-falsifiable arguments. That said, unscientific theory is not intrinsically false or inappropriate: metaphysical theories might be true or contain truth, and can help inform or structure scientific theories. Simply put, to be scientific, a theory must make at least some prediction potentially refutable by observation.

Premise Four: Evolutionary organization of information cannot be democratic and must follow logic (i.e. peer review), not popularism. We are all “standing on the shoulders of giants” (Newton). No progress can be made without understanding the research and works created by notable thinkers of the past. Social proof is anything but. Google’s philosophy, which assumes that democracy on the web works, is demonstrably false. A functional information management system will employ Hebbian theory. Just as biological neuroscience explains the adaptation of neurons in the brain during the learning process, the same model can be used to describe a basic mechanism for “synaptic” plasticity in connected systems, wherein an increase in synaptic efficacy arises from a presynaptic cell’s/node’s (connected ‘brain’s’) repeated and persistent stimulation of the postsynaptic unit.
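The Hebbian mechanism described above can be sketched in a few lines of Python. This is a minimal, rate-based illustration of the rule (“cells that fire together wire together”); the learning rate and activity values are assumptions chosen for clarity, not parameters from any real system.

```python
# Minimal sketch of a Hebbian weight update between two connected nodes.
# The connection strengthens in proportion to correlated activity.

def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    """Return the new connection weight after one co-activation step."""
    return weight + learning_rate * pre_activity * post_activity

# Repeated, persistent co-activation increases "synaptic" efficacy:
w = 0.5
for _ in range(10):
    w = hebbian_update(w, pre_activity=1.0, post_activity=0.8)
print(round(w, 2))  # 1.3: the link grew from 0.5 through repeated stimulation
```

In an information network, the analogue would be a link between a user (or node) and a document growing stronger with each meaningful, repeated interaction.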

Premise Five: Centralized commercial interests corrupt and sway development. Consequently, the potential of connected systems and connected knowledge has been underutilized, and (altruistic) progress has de facto halted, as the majority of Internet users have accepted a marketing-driven presentation layer – essentially commercial censorship – as the status quo (Google is not a search engine – it is a filter). Misalignments of interest (shareholder, operator, user) – a legacy of the industrial revolution (codified in for-profit corporations) – have created an environment of broken promises, which must be fixed by smart contracts that realign platform users, reduce friction and make middlemen and consultants (lawyers, accountants) obsolete. To re-align platform (Earth) participants (humans), we must create decentralized autonomous organizations (DAOs), which eliminate psychopathic structures such as current corporations.
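The realignment through smart contracts might look like the following sketch: funds are released to a contributor automatically once peer reviewers attest to the work, with no middleman holding or gatekeeping the payment. All class names, the review threshold, and the settlement rule here are hypothetical illustrations, not a real contract platform.

```python
# Hypothetical sketch of a smart-contract-style agreement that realigns
# participants: payment settles automatically when the agreed condition
# (enough peer approvals) is met, removing the need for a middleman.

class ContributionContract:
    def __init__(self, funder, contributor, amount, required_reviews=2):
        self.funder = funder
        self.contributor = contributor
        self.amount = amount
        self.required_reviews = required_reviews
        self.approvals = set()
        self.settled = False

    def approve(self, reviewer):
        """A peer reviewer attests that the contribution is valid."""
        self.approvals.add(reviewer)

    def settle(self):
        """Release funds once, and only once, the condition is met."""
        if len(self.approvals) >= self.required_reviews and not self.settled:
            self.settled = True
            return (self.contributor, self.amount)
        return None

contract = ContributionContract("dao-treasury", "alice", 100)
contract.approve("bob")
print(contract.settle())   # None: only one of two required reviews so far
contract.approve("carol")
print(contract.settle())   # ('alice', 100): condition met, funds released
```

The point of the sketch is that the rules are executed by the code itself rather than enforced by lawyers or accountants, which is what “reduce friction and make middlemen obsolete” means in practice.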

Premise Six: DNA before intent and projection. What is needed is an objectified classification of the human element (which I label “DNA”) within the network. Intent (i.e. Google (“search”)) and projection (i.e. Facebook) are non-directional approaches. A directional approach requires locating the user on more than just the location level, also including level of education, knowledge, etc.


Premise Seven: Capturing the cognitive surplus. Cognitive surplus, as used here, extends beyond crowd-sourcing by utilizing any type of engagement with any type of medium that can be contextually measured, hence assigning a qualitative element. What is needed is the utilization of the latent potential inherent in the use of information itself. For example: access to specific information by a specific individual carries a qualitative measure more relevant than any hyperlink – i.e. a research scientist spending time on a website containing information relevant to his or her field of expertise, as well as his or her engagement with other (digital) information related contextually as well as chronologically.
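The research-scientist example can be made concrete with a toy scoring function: an access event is weighted by how well the visitor’s expertise overlaps the document’s topics and by how long they engage with it. The fields, weights, and scoring rule below are assumptions for illustration only, not a proposed ranking formula.

```python
# Illustrative sketch of capturing cognitive surplus: score one access
# event by expertise/topic overlap and time spent, so that an expert's
# dwell time carries more signal than a raw hyperlink would.

def engagement_score(user_expertise, document_topics, seconds_spent):
    """Return a quality weight for a single information-access event."""
    overlap = len(set(user_expertise) & set(document_topics))
    if overlap == 0:
        return 0.0  # out-of-field visits contribute no expert signal
    # More topical overlap and longer engagement -> stronger signal,
    # capped so that idle open tabs cannot dominate the score.
    return overlap * min(seconds_spent / 60.0, 10.0)

# A research scientist reading a paper in their own field for 5 minutes:
print(engagement_score(["genomics", "statistics"], ["genomics"], 300))  # 5.0
```

A conventional hyperlink, by contrast, is a one-off binary vote; the event-based score above accrues continuously from real use of the information.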

Premise Eight: Discarded information carries value. There is a strong tendency among researchers, editors, and pharmaceutical companies to report and publish experimental results that are positive (i.e. showing a significant finding), but very few results that are negative (i.e. supporting the null hypothesis) or inconclusive (publication bias). Effective information management will include negative results.

Premise Nine: Promote viral distribution of successful concepts while building ‘herd immunity’ against the adoption of destructive or dysfunctional paradigms. Herd immunity describes a form of immunity that occurs when the vaccination of a significant portion of a population (or herd) provides a measure of protection for individuals who have not developed immunity. Herd immunity theory proposes that, in contagious diseases that are transmitted from individual to individual, chains of infection are likely to be disrupted when large numbers of a population are immune or less susceptible to the disease. The greater the proportion of individuals who are resistant, the smaller the probability that a susceptible individual will come into contact with an infectious individual. The concept carries over to information and its consumption by individuals.
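The threshold behind herd immunity is simple to state: if each carrier would otherwise pass a contagion (or an idea) on to R0 others, chains of transmission break once at least 1 − 1/R0 of the population is immune. The small sketch below computes this classical epidemiological threshold; reading “immune” as “well-informed” is the analogy the premise draws, not an established result.

```python
# Classical herd-immunity threshold: the fraction of a population that
# must be immune so each case produces, on average, fewer than one new
# case, disrupting chains of transmission.

def herd_immunity_threshold(r0):
    """For basic reproduction number r0, return the minimum immune
    fraction 1 - 1/r0 needed to halt individual-to-individual spread."""
    return 1 - 1 / r0

print(herd_immunity_threshold(4))  # 0.75: 75% of the herd must be resistant
print(herd_immunity_threshold(2))  # 0.5: a less contagious agent needs less
```

The more contagious a dysfunctional paradigm (higher R0), the larger the share of resistant, critically-thinking participants a network needs before its spread stalls.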

Premise Ten: Overcome linguistic barriers. Humans, like other primates, are notable for their social qualities. But beyond any other creature, humans are adept at utilizing systems of communication for self-expression, the exchange of ideas, and organization, and as such have created complex social structures composed of many cooperating and competing groups, from families to nations. Social interactions between humans have established an extremely wide variety of values, social norms, and rituals, which together form the basis of human society; at the same time, this diversity leads to misunderstanding and fear (of the unknown). An effective information management system will have to first overcome linguistic barriers before it can transfer knowledge. Hence any approach starting at the semantic level will fall short of this goal.

Premise Eleven: Create an effective marketplace for information exchange. Information is the ultimate ‘derivative’ of any asset. However, only a small fraction of information is available through organized marketplaces, most of which shift the compensation to aggregation and distribution of the asset. An effective marketplace for information exchange will focus on compensating the creation and curation of information, hence putting the focus on the quality of information rather than its use for a derivative purpose (i.e. advertising).

Premise Twelve: Create an energy-optimized information system that does not require new infrastructure investments. Each connected system must not only capture and disseminate its own data, but also serve as a relay for other systems (or nodes); that is, it must collaborate to propagate data through the network (the definition of a mesh network). Current ‘search engines’ are highly inefficient and add to the pollution of our environment. Performing two Google searches from a desktop computer can generate about the same amount of carbon dioxide as boiling a kettle for a cup of tea, according to research. Though Google says it is at the forefront of green computing, its search engine generates high levels of CO2 because of the way it operates. When you type in a Google search for, say, “energy saving tips”, your request doesn’t go to just one server; it goes to several competing against each other, and it may even be sent to servers thousands of miles apart.
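The relay behavior that defines a mesh network can be sketched with a few lines of flooding logic: every node consumes a message and forwards it to its neighbours, while a seen-set stops loops. The topology and names are illustrative assumptions; real mesh protocols add routing, TTLs, and congestion control on top of this core idea.

```python
# Minimal sketch of message flooding in a mesh network: each node both
# consumes data and relays it for others, so information propagates
# without any central server or new infrastructure.

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbours = []
        self.seen = set()

    def receive(self, message):
        if message in self.seen:      # drop duplicates to stop relay loops
            return
        self.seen.add(message)        # consume the data locally ...
        for n in self.neighbours:     # ... and act as a relay for others
            n.receive(message)

def link(a, b):
    """Create a bidirectional mesh link between two nodes."""
    a.neighbours.append(b)
    b.neighbours.append(a)

a, b, c = Node("a"), Node("b"), Node("c")
link(a, b)
link(b, c)              # a and c are not directly connected
a.receive("hello")
print("hello" in c.seen)  # True: the message reached c via relay b
```

Node b never asked for the message; it forwarded it anyway, which is exactly the collaborative propagation the premise describes.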

Premise Thirteen: Create a qualified smart mob collaboration tool (within the peer-to-peer layer) for impromptu response to crisis situations and to actively drive topic progress. A smart mob is a group that, contrary to the usual connotations of a mob, behaves intelligently or efficiently because of its exponentially increasing network links. This network enables people to connect to information and to others, allowing a form of social coordination (the concept was introduced by Howard Rheingold in his book Smart Mobs: The Next Social Revolution).

Conclusion. What is needed is a search engine in the form of an open source, independent, distributed search network and storage system (“Wiki”) designed to utilize the resources of all machines and all humans, including their relationship to each document (owner, user, contributor, etc.) as well as their profile and expertise, fostering logic-driven, evolution-like progress through compensation of contribution, while overcoming artificial barriers such as culture and language in a mesh-networked structure.

Prototype. A prototype might combine the following elements:

  • Browser and digital document viewer (based on an open source SDK);
  • Advertising filter and commercial intent filter;
  • Validation engine (crypto/smart contracts);
  • Content sharing (P2P), based on
    • group settings and/or
    • browser-embedded user classification/identification through artificial intelligence.