We collect an astounding amount of digital information. But as the Economist recently pointed out in a special report, we've long since exceeded our ability to store and sort it all. Big data is here, and it's got big problems.
Walmart's transaction databases are a whopping 2.5 petabytes. There are more than 40 billion photos hosted by Facebook alone. When there's this much data swimming around, it becomes nearly impossible to sort and analyze. And it's only growing faster: the amount of digital information increases tenfold every five years.
We've also run out of space. The Economist reports that the amount of data created will more than double the available storage by 2011.

And the information we can store becomes more and more difficult to sift through for future generations of researchers and businesses.
This may not seem like such a huge deal, but take a more recent, practical example. To get the definitive word on the Lehman Brothers bankruptcy, court-appointed examiner Anton R. Valukas had to sift through 350 billion pages of electronic documents. That's three quadrillion bytes of data. So how'd he look through all that information?
Simple. He didn't. Instead, loose search parameters were used to cut the number of emails and documents roughly in half, then teams of attorneys pared down what was left to a "manageable" 34 million pages. Valukas's final report was an expansive 2,200 pages long, but there's no way he was able to process all of the relevant documents, or that he was able to tell the whole story.

If there's hope to be found, it's in metadata. Much like library cards kept you from having to read every book, Google organizes your search queries and Flickr your photos. Even the tags on Gizmodo make it more manageable to find relevant content. But while metadata gives things searchable labels, the fact that it's often crowd-sourced means that those labels are at best inconsistent and at worst incomprehensible.
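The idea is simple enough to sketch in a few lines. Here's a toy illustration (the records and tags are hypothetical, not any real service's data model): metadata lets you find relevant items by their labels instead of reading every item, and a little normalization papers over some of the inconsistency crowds introduce.

```python
# Hypothetical photo records with crowd-sourced tags. Note the
# inconsistent casing ("ocean" vs. "Ocean") typical of such labels.
photos = [
    {"file": "beach.jpg",   "tags": {"vacation", "ocean"}},
    {"file": "receipt.jpg", "tags": {"expenses"}},
    {"file": "sunset.jpg",  "tags": {"Ocean", "sky"}},
]

def find_by_tag(records, tag):
    """Return files whose tags match the query, case-insensitively."""
    wanted = tag.lower()
    return [r["file"] for r in records
            if wanted in {t.lower() for t in r["tags"]}]

print(find_by_tag(photos, "ocean"))  # ['beach.jpg', 'sunset.jpg']
```

The search never touches the photos themselves, only their labels, which is exactly why it scales and exactly why inconsistent labels are so costly.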
We've also made some advances visualizing large data sets, a relatively fresh field only because it's only recently become a necessity. Whether graphing stock market data or turning large blocks of text into word clouds, it's imperative that we find ways to look at data that our brains can process more easily than they can long strings of raw information:
The brain finds it easier to process information if it is presented as an image rather than as words or numbers. The right hemisphere recognises shapes and colours. The left side of the brain processes information in an analytical and sequential way and is more active when people read text or look at a spreadsheet. Looking through a numerical table takes a lot of mental effort, but information presented visually can be grasped in a few seconds. The brain identifies patterns, proportions and relationships to make instant subliminal comparisons.

Processing information through images becomes ever more important if we ever hope to keep up with it.
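Even something as playful as a word cloud rests on a dead-simple reduction: count how often each word appears, then draw the common ones big. A minimal sketch of that counting step (the sample text is made up for illustration):

```python
# Tally word frequencies -- the raw material of a word cloud, where
# the most frequent words are rendered largest.
from collections import Counter

text = ("big data is here and big data keeps growing "
        "faster than our tools for reading data")

counts = Counter(text.split())
print(counts.most_common(3))  # 'data' and 'big' dominate
```

A few hundred thousand words collapse into a ranked list of a few dozen, which is the whole point: the summary fits in a glance even when the source text never could.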
We have a more thorough record of our lives and the world around us now than we ever have before. We can map the human genome in a week, for goodness' sake. All of which is wonderful! We should absolutely be leaving behind as much of a record of our existence as possible. But we should also figure out how to manage it, and present it, before big data balloons totally out of our control. [Economist]
Memory [Forever] is our week-long consideration of what it really means when our memories, encoded in bits, flow in a million directions, and might truly live forever.
