A New Type of Infrastructure for Local Systems (including local food systems) based on search

Today I read this amazing article:

http://www.kk.org/thetechnium/archives/2008/06/the_google_way.php

It's worth mentioning that there is no science without theory. The human mind is wired for creating theories; that is why there is a neural network built into our brain/mind system. It is wired to consider the un-manifested.

That being said, what Kevin Kelly describes is a lot like Stephen Wolfram's "A New Kind of Science". Not exactly the same, but closely related.

In both cases: a *search*-based science. Not that science didn't already have "search", but now you can start to create billions of possible combinations of simulations and then search through them. So it is simulation/search based.
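
Just to make the idea concrete, here is a toy sketch in Python of what "simulation/search" could look like: generate a big space of parameter combinations, simulate each one, then search the results. The little growth/decay model and its parameters are made up purely for illustration, not taken from Kelly's or Wolfram's work.

```python
# A toy version of simulation/search: generate a large space of
# parameter combinations, simulate each one, then search the results.
import itertools

def simulate(growth_rate, decay_rate, steps=50):
    """Hypothetical growth/decay model, just a stand-in for a real simulation."""
    population = 1.0
    for _ in range(steps):
        population += growth_rate * population - decay_rate * population ** 2
    return population

# Billions of combinations would need a cluster; a few thousand run fine locally.
growth_values = [g / 100 for g in range(1, 51)]    # 0.01 .. 0.50
decay_values = [d / 1000 for d in range(1, 51)]    # 0.001 .. 0.050
candidates = itertools.product(growth_values, decay_values)

# The "search" step: find the combination with the best simulated outcome.
best = max(candidates, key=lambda params: simulate(*params))
print("best (growth_rate, decay_rate):", best)
```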

You can compress thousands, or even millions, of years of human trial and error into this type of research. Plus, you walk away with billions of potential variations on your design, ready building blocks for adapting and changing that design, and data about how different variations perform under different conditions. You have a design DNA.

Right now, I use databases like Google as a kind of bionic brain extension. So I say bring on the new databases, the computer clusters, the algorithms, as long as we all have equal access to them.

The question for me is: what comes after search? When knowledge bases reach the exabyte level or higher, and there are algorithms searching and crunching through them, finding patterns and relationships, testing all of the possible combinations, how do we handle and process what the machines are outputting? How do we avoid becoming a cybernetic society?

An even more fundamental question is: what can you create with all of this data, and with the systems that are used to collect and analyze it? How can it be used as a medium for expression? What are the new ways to "see" and "feel" the data? How can these data systems be grounded as a self-balancing ecology, so that they don't overrun our existence like a form of digital toxic pollution and harm living systems, for example by people being ruled by algorithmic output that is skewed too far in one direction?

Another consideration: when entities like Google are the only ones that can wield and harness the resources needed to hold and process petabyte databases, we still have the potential for a power imbalance, where those who can hold and process the most data are the "wealthiest" in terms of capability, adaptability, and access to knowledge.

I wonder how many people realize that existing technology already provides the building blocks for p2p networks to exceed the capability of any one entity like Google?

I can see the possibility of something simple and elegant on a basic scale, that can scale up easily, that provides a social utility for anyone who accesses it, using the combined storage and processing resources of millions, or possibly even billions, of people. It could not be controlled for any exclusive purpose by any one person, and it could be governed democratically, with people opting out of participation should they not like the direction things are going in. We could have this today, and some people have already done it on a limited basis with things like SETI@Home.

What we need is more evolution in this area: more ways to use swarm supercomputers, ways that are accessible to many people. A way to turn swarm supercomputers into an open social utility. This is not out of our reach right now. We don't have to wait until networks are totally decentralized to build this into our social systems. We all have computers and operating systems, free CPU cycles, internet connections with extra bandwidth, and likely ideas about what we could do with those resources. There are already clients like http://boinc.berkeley.edu/, and systems like http://ceph.newdream.net/ or even http://www.bittorrent.com/, as building blocks upon which to improve. BitTorrent alone could even work if enough people participated.
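
To show the shape of what I mean (and this is only a rough sketch, not the actual BOINC or BitTorrent APIs), picture a coordinator splitting a big parameter sweep, like the toy one above, into small work units that volunteer machines crunch and return. Here a local process pool stands in for the swarm.

```python
# A rough sketch of the swarm idea: a coordinator splits a big job into
# small work units, "volunteer" workers each crunch some, and the results
# are merged. This is NOT the BOINC API, just the shape of the pattern.
from concurrent.futures import ProcessPoolExecutor  # stand-in for remote volunteers

def work_unit(params):
    """Hypothetical unit of work a volunteer machine would run."""
    growth_rate, decay_rate = params
    population = 1.0
    for _ in range(50):
        population += growth_rate * population - decay_rate * population ** 2
    return params, population

def make_work_units():
    """Coordinator side: enumerate the parameter sweep to hand out."""
    return [(g / 100, d / 1000) for g in range(1, 51) for d in range(1, 51)]

if __name__ == "__main__":
    units = make_work_units()
    # In a real volunteer network each unit would be sent over the wire;
    # here a local process pool plays the role of the swarm.
    with ProcessPoolExecutor() as swarm:
        results = list(swarm.map(work_unit, units, chunksize=100))
    best_params, best_score = max(results, key=lambda r: r[1])
    print("best parameters found by the swarm:", best_params, best_score)
```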

The point is, a p2p social computing/data utility could exist today, even with just BOINC and BitTorrent. I am now in discussions with communities like http://socialsynergyweb.org/oardc/startpage about how they could apply evolutionary computing, simulation, data mining, and other modeling and search techniques to local food systems (see http://socialsynergyweb.org/oardc/local-food-systems-computer-modeling-g...).
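
As one illustration of what "evolutionary computing applied to local food systems" could mean, here is a tiny genetic-algorithm-style sketch that evolves a way to split a fixed number of acres among a few crops. The crops, yields, and water numbers are invented for the example, not real agronomic data.

```python
# A minimal evolutionary-computing sketch for a local food system question:
# how to split a fixed number of acres among a few crops.
import random

CROPS = ["tomatoes", "beans", "squash", "greens"]
YIELD_PER_ACRE = {"tomatoes": 10.0, "beans": 4.0, "squash": 7.0, "greens": 6.0}  # made-up
WATER_PER_ACRE = {"tomatoes": 3.0, "beans": 1.0, "squash": 2.0, "greens": 1.5}   # made-up
TOTAL_ACRES = 100
WATER_BUDGET = 180.0

def random_plan():
    """An individual: acres assigned to each crop, summing to TOTAL_ACRES."""
    cuts = sorted(random.randint(0, TOTAL_ACRES) for _ in range(len(CROPS) - 1))
    acres = [a - b for a, b in zip(cuts + [TOTAL_ACRES], [0] + cuts)]
    return dict(zip(CROPS, acres))

def fitness(plan):
    """Toy score: total yield, penalized if the plan exceeds the water budget."""
    total_yield = sum(plan[c] * YIELD_PER_ACRE[c] for c in CROPS)
    water = sum(plan[c] * WATER_PER_ACRE[c] for c in CROPS)
    return total_yield - max(0.0, water - WATER_BUDGET) * 5.0

def mutate(plan):
    """Move one acre from one crop to another."""
    child = dict(plan)
    give, take = random.sample(CROPS, 2)
    if child[give] > 0:
        child[give] -= 1
        child[take] += 1
    return child

population = [random_plan() for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                           # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print("best plan found:", max(population, key=fitness))
```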

A BOINC/BitTorrent system could be used with applications like http://www.urbansim.org/, http://code.google.com/p/optimaes/, and countless other open source simulation systems, not to mention data mining, GIS analysis, etc. This could give local communities access to powerful research and development facilities. It could also be used to render designs and crunch numbers for FEA (finite element analysis) and similar engineering work.

The question is, why isn't this already happening? Probably primarily because we get a minimum of what we need from free/ad-based systems like Google. But we could have a lot more, even right now. There is a huge amount of inherent wealth and untapped commons available right now.
