June 23, 2011
I always love passing a road crew and watching one person work while three others watch. Having been the one person in the hole with a shovel actually doing the work, I thought I’d at least let you know I am at work on the next post here. I feel that may be particularly important after my long absence.
I also wanted to let any surviving readers know that I have just updated the Excel spreadsheet, which you can find and download from the box.net app at the bottom of the right-hand column of this blog. The reason this may be noteworthy is that I’m trying to cultivate it as a tool for you.
I started this Excel sheet a couple of years ago, when I was posting/curating so many articles on social media that I often couldn’t tell if I’d already posted them someplace when I ran across one I thought was particularly worthy of republication on my LinkedIn group, Social Media Search and Forensics, on my Twitter feed, or elsewhere. I didn’t want to repeatedly republish such an article, so this was an internal tool for me.
When recently working on some seminars I’m doing on social media for lawyers, I went back to this resource and thought that, if I updated and improved it somewhat, it might actually be a tool of some value for those searching for articles on social media. To that end, I’ve started (but certainly not finished) updating it. I’ve also added some columns to make it easier (hopefully) to find articles, ethics opinions, court decisions, white papers, and other materials.
I’ve just passed 600 sources on this list, so if you need to find something, such as articles on jury misconduct involving social media, you may find this spreadsheet of some value. I’m also looking for feedback, such as that recently donated by my new social media friend, Shawn Tuma. I’ve followed Shawn for a while on Twitter and on his blog, and have often reposted valuable material I found there. He was also kind enough to leave me some comments on our mutual problem of more efficiently bookmarking the material we run across as we research technology and social media issues for our clients and our various online publications. I think we are both looking for some true, Web 3.0-type help from readers who may, collaboratively, be able to help all of us sharpen the tools we use.
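For readers who would rather script a search than scroll through 600 rows, filtering a list like this is straightforward once it is in structured form. The rows and column names below are hypothetical stand-ins for whatever the actual spreadsheet uses; this is only a sketch of the idea:

```python
# Rows mimicking the spreadsheet's structure; the real column names
# and entries in the downloadable file may differ.
rows = [
    {"title": "Juror's Facebook Post Prompts Mistrial Motion",
     "type": "Court decision", "topics": "jury misconduct; social media"},
    {"title": "May Lawyers 'Friend' Represented Parties?",
     "type": "Ethics opinion", "topics": "ethics; social media"},
]

def find(rows, keyword):
    """Return rows whose topics column mentions the keyword, case-insensitively."""
    kw = keyword.lower()
    return [r for r in rows if kw in r["topics"].lower()]
```

The same one-liner filter works whether the sheet is exported to CSV, loaded into a database, or pasted into a script.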
In any event, I should be posting a new article here soon (working title is: Are Judges Starting to “Get” Social Media?) and hope you’ll be able to find some interesting articles in the attached Excel spreadsheet.
December 13, 2010
I recently had the opportunity to attend a CIO Practicum at the University of Kentucky. The session, entitled “Process Knowledge; The Left-Brain Side of Ignition,” was facilitated by futurist Thornton May.
The primary issue involved our relatively new ability to digitize mission-critical business flows, which has provided us with a rich new source of organizational learning and competitive knowledge. The primary question was whether we are using that knowledge opportunity properly. The secondary question was whether we are learning what we should.
I’ve been interested in information management and knowledge processing for quite some time. Maybe it is the ADD, or perhaps seeing both law firm and consulting clients do well or not so well with it, but it has remained an interest of mine for years. I’ve experimented with a host of KM applications to make life easier on myself and my consulting clients. I’ve read a ton of books, such as Taming the Information Tsunami, by Bill Bruck Ph.D., but eventually started to develop my own systems to keep track of court dates, client needs, topical information to archive and retrieve, etc.
Several years ago, I worked with a consulting client that was attempting to merge two law departments. During the initial analysis stage of the project, I found that the director of one of the law departments regularly received a stack of reports on the corner of her desk. This “report” was a compilation of data that various predecessors had felt important enough to warrant being turned into paper copies. The stack of paper was several inches high.
The new law director found that it took her an unconscionable amount of time to try to wade through the data, and likewise found that it regularly failed to provide information she needed most to properly manage her department. She then developed a list of new data to be reported and abandoned the old stack.
While it might be interesting to ask the former law department directors to review this decision to abandon the data they had felt important to governing their operation, it might be more interesting to analyze what sort of data was collected at various points in time, compare it to how useful it actually was to those who asked for it, and try to spot trends and forecast the most expeditious methods of data collection and use in the future.
I was somewhat surprised that most of the CIOs at the Practicum agreed that many decision makers in their organizations seemed to demand the collection and reporting of so much data that, in the final analysis, the practice could actually be a source of management paralysis. So much of the organization’s resources were devoted to collecting and analyzing minutiae, while core questions went overlooked, that critical decisions were constantly jeopardized.
I recently read an article by Scott Belsky, Beware of Reactionary Workflow. Belsky makes the point:
… we live our lives just trying to keep our heads above water. Our ability to prioritize and control our focus is crippled by an unyielding flow of incoming communication: email, texts, tweets, facebook messages, phone calls, and so on (and on).

Without realizing it, most of us have entered the new era of what I call “reactionary workflow.” Rather than being proactive with our energy, we are acting in response to what is incoming. Having relinquished control over our focus, it has become harder and harder to embark on our work with intention.
Some say my mind works in strange ways, but what this “information tsunami” brings to the surface for me is that, on the one hand, our ever-increasing ability to “digitize” essentially everything is a plus in such areas as robotics, but on the other, it can lead to human paralysis. We are simply being out-analyzed by the machines and “AI” systems we are building.
“Way back” in 1997, IBM’s Deep Blue computer defeated the reigning World Chess Champion, Garry Kasparov. More recently, Rick Martin wrote about a “Chess Terminator” robot that played another former chess champion. Martin notes that “the robot is apparently quite energy efficient as well, as Kosteniuk has claimed that it can continue playing for 24 hours a day for three years straight.” Although the match resulted in a draw, it produced an element of drama unintended by the robot’s human creators:
The Chess Terminator does have some flaws, however. Note that around the 2:45 mark Kramnik extends his hand offering a draw, but the robot – since it’s not fitted with any kind of optical device – just keeps playing, very nearly taking off Kramnik’s hand in the process!
Being a science fiction “nut,” I don’t need to hear “Terminator” more than once to get my imagination flowing. The word conjures up hours of watching former “Governator” and “Terminator” Arnold Alois Schwarzenegger overpower humankind in his role as a cyborg assassin whose sole mission, in the original 1984 film, is to destroy humans as part of a military artificial intelligence system called Skynet.
Skynet, of course, becomes self-aware in the movie, and then decides that human error and humans’ relatively slow reaction time are a threat to its mission of providing the fastest and most efficient response to enemy attack. Naturally, the sci-fi response calls for it to embark on the destruction of humans through the use of self-aware cyborgs such as the Terminator.
I’m not ready to give up my Roomba® vacuum cleaner, for fear it will evolve into a Terminator, but I do fear that some of the artificial intelligence systems are capable of outpacing human ability to understand and, perhaps, to control them. We seem to be more capable of instilling ever increasing amounts of data and data analysis capability into things we build, but not necessarily into ourselves.
In a host of movies, including such classics as The Time Machine and Forbidden Planet, a subtheme is that the creators of powerful machines and systems build them to such perfection, and become so dependent upon them, that some intervening holocaust or their own arrogance eventually leaves them no longer in control. The result is never pretty, at least in the science fiction world.
One reason for my fascination, however, is that science fiction sometimes provides a way to forecast the future. Without straying too far from the genre, one can peruse such books as William Shatner’s I’m Working on That: A Trek From Science Fiction to Science Fact. In addition to chronicling a number of now-common inventions foreshadowed by science fiction authors, Shatner points out that unleashing “technological genies” from their bottles without due consideration for the consequences can lead to a bad result for humans, perhaps including the “ultimate hubris” of trying to play God.
Hindsight can be a wonderful thing, but obviously not as valuable as foresight. I do believe that in many ways history repeats itself, and that in looking back, we can often use analysis to help prevent making the same mistakes in the future. As Albert Einstein is often quoted as saying, one of the definitions of insanity is “doing the same thing over and over again and expecting different results.” He is also credited with saying “the problems that exist in the world today cannot be solved by the level of thinking that created them.”
I’m sure many would challenge the following thoughts, but it seems to me we already know that:
- we have a rapidly increasing ability to “digitize” information so that, as one person at my dinner table during the CIO Practicum put it, we are heading toward a time when every cell in our body will have something like its own IP address;
- the amount of data we are collecting and attempting to analyze is starting to exceed our ability to effectively use it to accomplish the goals for which it is collected;
- much of the data being collected and reported is the result of legacy thinking no longer relevant to the “mission” of the data collectors;
- we are building ever more “intelligent” machines to sift through this data tsunami and react to it because they can do so more effectively than we humans can;
- as the amount of data being reported to us increases, and as we take on ever more multi-tasking, we may become less and less able to verify the accuracy of that data, or to use it, without relying upon “the machines.”
While reliance on our machines seems to never end well for humans in science fiction tales, with the possible exception of R2-D2 and a few others, we seem hardly capable of stopping our progress toward creating more intelligent machines. Creating machines that free us, like my Roomba® robotic vacuum cleaner, hopefully allows us the “free time” to be more creative and to accomplish things other than developing another tool which might accidentally lead to the self-destruction of the human race. There are clearly a multitude of robotic creations designed to save human lives. Others, such as the system highlighted in the following short clip, can be both useful and a way to show the “cutting edge” technology available for use by a major world bank:
At the risk of seeming like the technophobic character played by Will Smith in the movie I, Robot, loosely based on Isaac Asimov’s works, I do have to wonder if we’re not overlooking something in our race to the future. One possibility, based upon yet another scientific breakthrough, is “self-evolving data.” The thought came to me when I read an article by Boonsri Dickinson, How this self-healing material can sense when there’s trouble. Obviously a kindred spirit at some level, she notes:
I’m not ashamed to admit this, I loved watching Terminator. The cyborg assassin (a.k.a Arnold Schwarzenegger) always recovered from damage, no matter what happened. It turns out, the ability for material to really do this, isn’t that far-fetched after all.
- Engineer Henry Sodano brings a little of James Cameron’s science fiction movie into his lab at Arizona State University.
Sodano told me about how his self-healing material can sense when there’s trouble.
Essentially, Sodano thinks his polymer should be as smart as our bones. The material must know when it has a problem, know how to stop the crack from causing more damage and be able to fix it.
It’s the sensors embedded into the polymer material and external computer system that give the self-healing material the brains to know what’s going on.
Imagine a material engineered in such a way that it can fix itself, just as Arnold’s character did in the Terminator series. How hard, then, could it be to create a system for gathering whatever information you can, but engineered so that:
- it learns what is most relevant now;
- over time is able to use the data, without human intervention, to fine tune the way it gathers, archives, and reports data in the future;
- is able to sort out and eliminate “biased” data;
- is able to spot and report trends;
- is able to prioritize information it reports, so that it provides humans with a sort of “just in time” and “just the facts, Ma’am” data report; and
- facilitates the inherent wisdom of humans in using the data without interference from generations of legacy constraints.
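The wish list above can be caricatured in a few lines of code. What follows is purely an illustrative sketch under my own assumptions; the class name, the feedback signal, and the weighting scheme are all invented for this post, and a real system would learn relevance from far richer signals than a simple acted-on/ignored flag:

```python
from collections import defaultdict

class AdaptiveReporter:
    """Toy sketch of a self-tuning report: it learns which data
    categories matter by watching which reported items a human
    actually acts on, then prioritizes future reports accordingly."""

    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.relevance = defaultdict(lambda: 0.5)  # neutral prior per category

    def record_feedback(self, category, acted_on):
        # Nudge the category's relevance toward 1.0 (acted on) or 0.0
        # (ignored) with an exponential moving average.
        target = 1.0 if acted_on else 0.0
        old = self.relevance[category]
        self.relevance[category] = old + self.learning_rate * (target - old)

    def report(self, items, top_n=3):
        # items: list of (category, payload). Surface only what has
        # proven relevant: a "just the facts" digest rather than the
        # full stack of paper on the corner of the desk.
        ranked = sorted(items, key=lambda it: self.relevance[it[0]], reverse=True)
        return ranked[:top_n]
```

The moving average means legacy categories that nobody acts on fade from the report on their own, with no human needing to decree their retirement, which is the point of the wish list.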
Anton Chekhov is credited with having said: “knowledge is of no value unless you put it into practice.” Jack Vinson is credited with having said: “Knowledge without Action is useless. Action without Knowledge is dangerous.” I agree with both of these gentlemen, but wonder if we can improve our odds of ending up in a different place than the one many science fiction tales suggest we might be destined for.
When I started researching this topic, I was happy to find that what I was suggesting (i.e., concentrating more heavily on developing a “wise,” self-evolving data system) had been in the works for quite some time. In fact, the topic includes everything from research into fuzzy systems to the Polytopia project, to patents granted for a “Self-evolving database and method of using same,” described in part as:
“Self-evolving indexing systems and methods store and retrieve information on multiple types of items. Indices evolve at least in part by providing subsequent users with summary comparison usage information based upon the choices of previous users, and in part by permitting subsequent users to add new classifications, parameters, and values instead of being limited by those previously used by others.”
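The core idea in that description (an index whose classification vocabulary grows as users add to it, and which summarizes earlier users’ choices for later ones) can be sketched in a few lines. The names and structure here are my own illustration, not the patented implementation:

```python
from collections import Counter

class SelfEvolvingIndex:
    """Minimal sketch: classifications are never fixed in advance;
    the index also feeds back which classifications previous users
    chose most often."""

    def __init__(self):
        self.items = {}         # item_id -> set of classifications
        self.usage = Counter()  # classification -> times chosen

    def classify(self, item_id, classification):
        # Users are not limited to existing classifications; any new
        # one simply joins the vocabulary.
        self.items.setdefault(item_id, set()).add(classification)
        self.usage[classification] += 1

    def suggest(self, top_n=3):
        # "Summary comparison usage information": the classifications
        # previous users chose most often.
        return [c for c, _ in self.usage.most_common(top_n)]

    def lookup(self, classification):
        return sorted(i for i, cs in self.items.items() if classification in cs)
```

Each use of the index refines the guidance it offers the next user, which is the “self-evolving” part.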
Companies such as IBM have apparently been trying to develop a computer that works like the human brain. Their “SuperBrain” project has made interesting strides, but has raised a number of significant issues, such as:
- If humans are able to successfully re-create their organic brains with cold chips and circuits, using their natural intelligence, and technology grows at an exponential rate (Moore’s Law) while evolution progresses at a relatively steady rate (Becoming Human), is it fair to say that the computing power, complexity, diversity, and richness of IBM’s Blue Gene supercomputers will then outmatch those of the very brains which produced them?
- If the brain is the most evolved and sophisticated system known, then creating a computer that exceeds human cognitive capacities is really the triumph of Technology over Evolution, of Human Selection over Natural Selection, no? Does this then place us technically in a new stage of evolution? Are we already in the age of self-evolution?
- Does it not make sense in terms of Darwinian Fitness that the brain should try to re-create itself in a form that is malleable and augmentable, allowing the hand of Human Will to grow more robust than the hand of Nature’s Will?
Likewise, there are numerous articles on everything from self-evolving Web sites and Web 3.0 to evolutionary fault repair of electronics in space applications. I am convinced that we have the ability to come very close to creating truly self-aware machines in my lifetime. I have no doubt we already have created, and will continue to create, machines more capable of achieving a pre-defined purpose than their human creators are. Whether we should, is yet another matter.
If we continue to collect data without reaching the next level, in terms of what we collect and how we archive and retrieve it, then we will soon be buried in it. While we can instill knowledge in our machines, it seems apparent we may not have the wisdom to impart wisdom to them.
Looking at the history of one of mankind’s greatest evolutionary achievements, “civilized” government, we find that when we create a bureaucratic system, that system fairly quickly acts instinctively to preserve itself. For that reason, even temporary agencies often find a way to sustain themselves well after their mission is achieved and their planned term has ended.
If we endow our systems, including machines, with ever greater knowledge and the ability to adapt to accomplish their “mission,” might we not find that we have created a system or machine which, like the Skynet of Terminator fame, determines either that it is more capable of achieving its mission without us, or that we misjudged what the mission should be, and becomes determined to supplant the human mission with its own?