Wednesday, January 5, 2011

Hill readers: How weird are you?

By Mark Mellman, The Hill

We each like to believe we are pretty normal, pretty much like everyone else. Other people know what we know; we see things the way most other people do.

However, the simple truth is that if you are reading this paper, you are most likely a bit of a freak. And if you are not at the freakish edge of the public at large, you are certainly out of step with fellow Hill readers, because the rest of us are quite different from most of America.

It’s evident in what we pay attention to. Whatever our partisan affiliation, we watched with rapt attention as Scott Brown took Ted Kennedy’s Senate seat last year. Pew research revealed that only 36 percent of our fellow Americans joined us in paying attention to that stunning turnaround. By December, we were engrossed in every twist and turn of the debate over the Bush tax cuts. Just 37 percent of the public paid attention to that. The midterms were all-consuming for us, but fewer than half the public focused even on the results of our biennial exercise in democracy while, at the height of the election season itself, a mere 29 percent were paying close attention to news about the elections.

Of course, Americans do not tune out all news, all the time. Sixty percent followed the earthquake in Haiti, 49 percent were fixated on the trapped miners in Chile. Speaking of Chile, there was about as much interest in news about December’s cold snap as in November’s election results.

Face it — we concentrate on a different set of issues than does the public.

Those differences in attention generate differences in knowledge. I will wager that every single person reading this column knows that John Boehner will become Speaker of the House, knowledge shared by just 38 percent of our fellow citizens. Indeed, 40 percent do not even realize that Republicans will have a majority in the new House. Pew also informs us that nearly one in five Americans believe the much-reviled former BP CEO Tony Hayward is actually the Prime Minister of Great Britain — more than can correctly identify David Cameron as the true incumbent.

Having more information gives us more opinions than typical voters. Again, I’d bet all of us had a view about START. However, only 16 percent of Americans heard a lot about the nuclear arms treaty with Russia, and nearly a third heard nothing at all. Even among those who claimed some knowledge before the Senate vote, Pew tells us almost a quarter had no opinion on whether the treaty should be ratified. Some one in six had no preference on the fate of the Bush tax cuts or on whether gays should be permitted to serve in the military. Heading into the election, a CBS/New York Times poll found nearly half the electorate holding no opinion whatsoever about the Tea Party. How many Hill readers fell into that category?

Sometimes the unique knowledge of the cognoscenti translates into very different perspectives on issues than those of the voters. While Republicans and Democrats disagree on the relative priorities implicated in the trade-off, partisans would tend to agree that balancing the budget requires either raising taxes or cutting programs. Three-quarters of the electorate starts the debate in a very different place, denying the need for trade-offs, asserting that if politicians only made smarter decisions we could eliminate the deficit without either cutting important programs or raising taxes.

We are weird, though we often wear our strangeness as a smug superiority, seeing our knowledge as a blessing. In many ways it is. However, when it comes to communicating with the public, knowledge can be a curse — a curse we will examine in a forthcoming column.

Mellman is president of The Mellman Group and has worked for Democratic candidates and causes since 1982. Current clients include the majority leaders of both the House and Senate.

Making Things Work: Solving Complex Problems in a Complex World

Yaneer Bar-Yam writes in the book’s Preface: “In recent years the rapidly changing world around us has been raising concerns about the ability of people to cope with change. Future Shock, The Ingenuity Gap, and other books describe the difficulty of people living in our complex world. Complexity may seem overwhelming but it is not a bad thing. The complexity of the world is a mirror reflection of ourselves working together to make the world work. We, together, are becoming increasingly complex. The reason we can do this is that we work together in increasingly effective ways. We are connected to each other in ways that allow us to respond as teams and organizations. This enables us to do things we would not be able to do by ourselves, not just in terms of amount of effort but in terms of complexity. Complex tasks require complex organizations. When we are part of a complex team we find the world a remarkably comfortable place, because we can act effectively while being protected from the complexity of the world. This feeling is like the experience of a cell in a body, protected from the environment, and contributing to the organism function. Today civilization is the organism we are part of. We are in the midst of a remarkable transition from the individual to the group, organization, and even to global civilization as a functioning unit. While this is a mind bending transition, it is a transition of opportunity for creating a world that works for everybody, on the global level and on the level of each individual.”

Later, he continues this train of thought: “Today we often describe the world around us as highly complex. Complexity manifests in everything from individual relationships to corporate challenges to concerns about the human condition and global welfare. As a global community, we are in the middle of a transition from the industrial to the information age, and this transformation is reflected and rereflected in everything around us. The amount of information that is flowing and the rate of change of society are both aspects of the growing complexity of our existence. As individuals, we have a hard time coping with all the information and change. In some sense more importantly, our society is also having difficulty coping with its own changes.

Our economic and social institutions, that we rely upon at critical times of our lives, including the health and education systems, are changing, not always gracefully, to meet the new challenges. Professional activities, from corporate management to systems engineering, require new approaches, insights and skills. Global concerns, such as environmental destruction and poverty - in developed and undeveloped nations - are becoming more pressing as these changes take place.

Despite major efforts to identify the solutions to these problems, they are often obscure and hidden from us. Even when we think we are making progress, the solutions we think of today may cause us more problems tomorrow. This is because complex problems do not lend themselves to easy solutions. Any action may have hidden effects that cause matters to become worse and the whole strategy we are using may be moving things in the wrong direction. Complex problems are the problems that persist-the problems that bounce back and continue to haunt us. People often go through a series of stages in dealing with such problems-from believing they are beyond hope, to galvanizing collective efforts of many people and dollars to address the problem, to despair, retreat, and rationalization. The progress made seems miniscule compared to the effort and resources expended. Even with all of the modern technological advances, it is easy to become pessimistic about the world today. There is hope, however, in the recognition that people can solve very complex problems when they work together effectively. Unfortunately, this is generally not how we respond when there are problems. We don't always realize the ability that we have when we work together. We tend to assign blame or responsibility to one individual.”

The author summarizes the book: “Developing the ability to use a complex systems perspective requires new patterns of thinking. In the first section of this book some of the key complex systems ideas are described. These ideas - like emergence and interdependence - have to do with relationships between parts of a system and how these relationships lead to the behavior of the system. After all, society works because of how people interact with and relate to each other, not how each person acts separately. The results of the interactions between people are patterns of behavior. We will look at how patterns can arise from interactions without someone putting the parts of the pattern in place by telling each person what to do. Using our understanding of how neurons interact in the brain, we will show how the pattern of behavior can be made to serve a purpose. We will find that the type of pattern that arises can be related to how the system is organized - who can interact with whom. We will look more generally at the set of things a system can do, and how this set of actions is related to how it is organized. Some organizations are good at doing complex tasks, and some are not. Perhaps not surprisingly, centrally controlled or hierarchical organizations are not capable of highly complex tasks. This means that we have to figure out how to make distributed/networked organizations if we want to solve complex problems. Finally, we learn about evolution, how really complex systems (including distributed/networked organizations) can form and be effective without being planned (which is crucial because planning them doesn't work!). Counter to how evolution is usually discussed, it is not just about competition, it is always about both competition and cooperation. Competition and cooperation work together at different levels of organization, just as in team sports where players learn to cooperate because of team competition. Making an effective organization is making a successful team.”

He applies these principles to the following systems as examples:

  • Health care/medical system
  • Education system
  • Corporate management
  • International development
  • Military
  • Engineering
  • International terrorism

My understanding of his work, acquired only from reading this book, leads me to believe that what he is talking about are complicated or unorganized complex systems. I don’t believe that his approaches will work well for many structured complex systems. He mentions these only once and seems to dismiss their difficulties without explaining how his approach would address them. “Before we can explain how system problems arise and can be fixed, we have to understand something about how systems work. This is where science can help. For many years there has been a sense that chaos and complexity, promising new areas of scientific inquiry, have something fundamental to tell us about the world in which we live. James Gleick's classic book Chaos: Making a New Science (1987) and many other books in later years have raised popular awareness of these directions of research. Much of the focus has been on recognizing the intrinsic unpredictability of nature, and - by extension - of society. However, beyond the fascinating applications to turbulence, meteorology, and other complex problems in the natural world, complex systems science has more to tell us about the world - including human beings and their interactions - than just that it is unpredictable.”

There are two concepts in this book that I have found very helpful to my thinking:

  1. You need a complex system to solve a complex problem. I had intuited this earlier (Simple, Complicated or Complex), but he draws the implications out even more than I have. Humans are complex systems. Groups of humans and technology working together are an even more complex system. Therefore you can’t use a simple or a merely complicated system to solve the problems of these types of systems. Even a single human, as complex as he or she is, can’t grasp these types of systems. The only way we can solve the problems of these types of systems is with collaborative, creative groups of humans and technology. (See 1, 2 a Few and Many for descriptions of the types of complexity; a toy sketch of this complexity-matching idea appears just after this list.)
  2. Systems can appear simple, complicated or complex at different scales.
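To make the first point concrete, here is a toy sketch of my own (not code or notation from Bar-Yam's book) that recasts the complexity-matching idea in the spirit of Ashby's law of requisite variety. It assumes the strictest case, in which every distinct situation a task presents calls for its own distinct response; under that assumption, a responder with a smaller repertoire than the task is guaranteed to mishandle some situations, no matter how well its limited responses are chosen.

    # A toy illustration, not code from the book: a "task" presents some number
    # of distinct situations, and a "responder" (a person, a plan, an
    # organization) has some number of distinct responses available.
    def minimum_failure_rate(task_variety: int, responder_variety: int) -> float:
        # Each distinct response can correctly handle at most one distinct
        # situation, so at least (task_variety - responder_variety) situations
        # go unhandled even with a perfectly chosen repertoire.
        unhandled = max(0, task_variety - responder_variety)
        return unhandled / task_variety

    # A task with 128 distinct situations, faced by responders of growing variety.
    for variety in (2, 8, 32, 128, 512):
        rate = minimum_failure_rate(128, variety)
        print(f"responder variety {variety:4d} -> best-case failure rate {rate:.2f}")

Only when the responder's variety is at least as large as the task's does the best-case failure rate fall to zero, which is the sense in which a simple or merely complicated system cannot, even in principle, fully handle a genuinely complex problem.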

Bar-Yam summarizes his book this way: “To solve complex problems we must create effective complex organizations. The underlying challenge of this book is the question: How do we create organizations that are capable of being more complex than a single individual? Living with complexity is challenging, but we can and should clearly understand the nature of how it can be done, both for individuals and organizations. The complexity of each individual or organization must match the complexity of the task each is to perform. When we think about a highly complex problem, we are generally thinking about tasks that are more complex than a single individual can understand. Otherwise, complexity is not the main issue in solving it. If a problem is more complex than a single individual, the only way to solve it is to have a group of people-organized appropriately - solve it together. When an organization is highly complex it can only function by making sure that each individual does not have to face the complexity of the task of the organization as a whole. Otherwise failure will occur most of the time. This statement follows quite logically from the recognition of complexity in problems we are facing.

Our experience with organizing people is for large-scale problems that are not very complex. In this case the need for many people arises because many individuals must do the same thing to achieve a large impact. In this old reason for organizing people, a hierarchy works because it is designed to amplify what a single person knows and wants to do. However, hierarchies (and many modifications of them) cannot perform complex tasks or solve complex problems. Breaking up (subdividing) a complex task is not like breaking up a large scale task.

The challenge of solving complex problems thus requires us to understand how to organize people for collective and complex behavior. First, however, we have to give up the idea of centralizing, controlling, coordinating and planning in a conventional way. Such efforts are the first response of almost everybody today because of the effectiveness of this approach in the past. Instead, we need to be able to characterize the problem in order to identify the structure of the organization that can solve it, and then allow the processes of that organization to act. The internal processes of that organization can use the best of our planning and analysis tools. Still, ultimately, we must allow experimentation and evolutionary processes to guide us. By establishing a rapid learning process that affects individuals, teams and organizations, we can extend the reach of organizations, allowing them to solve highly complex problems.

I appreciate that I am only one human being and my understanding of the world is consequently quite bounded. Still, it is reasonable to hope that some of the concepts discussed here may be of use to you. Others will complement or contradict me as necessary.

The basic concepts that I hope to have contributed an appreciation for are as follows:

  • The functional importance of independence, separation and boundaries as counterpoints to the importance of interdependence, communication and integration;
  • The trade-offs in scale and complexity, where increasing the set of behaviors possible at one scale (complexity at that scale) requires a reduction in complexity at other scales;
  • The need for matching the complexity of the system at each scale to the complexity of the environment (task) at the same scale for the system to be successful;
  • The diverse nature of distributed networked systems that are not all the same thing (contrast, for example, the immune system and the nervous system), but can be understood from the same general principles;
  • The essential complementarity of competition and cooperation at different levels of organization;
  • The constructive nature of both competition and cooperation in forming complex systems;
  • The limitations of conventional planning in creating and managing complex systems and the essential importance of planned environments for evolutionary processes;
  • The practical utility of fundamental complex systems ideas;

Slightly less apparent but no less important are the recognition and appreciation of:

  • the profound paradoxical importance of individual and group differences as a universal property of complex systems;
  • the significance of specialization in effective collective behavior, including specialization of individuals and specialization of large subsystems;
  • the remarkable emergent behaviors that combine simple capabilities to allow dramatic system capabilities;
  • the universal nature of patterns of collective behavior, which serve as elementary building blocks of complex systems just as atoms do;
  • the ubiquity of pattern forming processes, differentiation, and particularly local-activation long-range inhibition mechanisms for such patterns.

Finally, along with the recognition of complex problems that we continue to face in this world, we have also pointed out the increasing complexity of society. This increasing complexity implies great capabilities. Indeed, it suggests that we, together, are becoming remarkably effective at solving complex problems in a complex world.”

Making Things Work: Solving Complex Problems in a Complex World, Yaneer Bar-Yam, NECSI Knowledge Press, 2004, 306pp
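One of the mechanisms Bar-Yam lists, local activation with long-range inhibition, is concrete enough to demonstrate in a few lines. The following is my own minimal sketch, not code from the book: cells on a ring reinforce their near neighbors and suppress more distant ones, and a striped pattern emerges from random noise.

    import numpy as np

    # Minimal sketch (my own, not from the book) of local activation with
    # long-range inhibition on a 1-D ring of cells.
    rng = np.random.default_rng(0)
    n = 200
    state = rng.uniform(-0.1, 0.1, n)            # small random initial activity

    def ring_average(x, radius):
        # Mean activity within `radius` cells on a periodic lattice,
        # computed as a circular convolution with a box kernel.
        kernel = np.zeros(len(x))
        kernel[:radius + 1] = 1.0
        kernel[-radius:] = 1.0
        window = 2 * radius + 1
        return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel))) / window

    for _ in range(200):
        activation = ring_average(state, 3)      # short-range excitation
        inhibition = ring_average(state, 12)     # longer-range suppression
        state = np.clip(state + 0.2 * (activation - inhibition), -1.0, 1.0)

    # Render the final pattern: '#' marks active cells, '.' marks inactive ones.
    print("".join("#" if v > 0 else "." for v in state))

The spacing of the stripes is set mainly by the inhibition radius, which is the qualitative story behind the pattern-forming and differentiation processes the book alludes to.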

Tuesday, January 4, 2011

The Wealth of Networks: How Social Production Transforms Markets and Freedom

I normally write extensive reviews of the books I read, but I can’t do that with this one. Yochai Benkler’s book is so comprehensive in scope and detail (515pp) that it defies summarization. Suffice it to say that this is the authoritative book on the subjects of networks and social production. It’s a difficult read (the language is academic) but rewards the reader with an unsurpassed view of the landscape it covers. In the author’s own words in the acknowledgments, “Another great debt is to David Grais, who spent many hours mentoring me in my first law job, bought me my first copy of Strunk and White, and, for all practical purposes, taught me how to write in English; as he reads these words, he will be mortified, I fear, to be associated with a work of authorship as undisciplined as this, with so many excessively long sentences, replete with dependent clauses and unnecessarily complex formulations of quite simple ideas.” Lawrence Lessig comments, “In this book, Benkler establishes himself as the leading intellectual of the information age.”

What is written here is my attempt to cover the salient points of the introduction.

He begins with, “Information, knowledge, and culture are central to human freedom and human development. How they are produced and exchanged in our society critically affects the way we see the state of the world as it is and might be; who decides these questions; and how we, as societies and polities, come to understand what can and ought to be done. For more than 150 years, modern complex democracies have depended in large measure on an industrial information economy for these basic functions. In the past decade and a half, we have begun to see a radical change in the organization of information production. Enabled by technological change, we are beginning to see a series of economic, social, and cultural adaptations that make possible a radical transformation of how we make the information environment we occupy as autonomous individuals, citizens, and members of cultural and social groups. It seems passé today to speak of “the Internet revolution.” In some academic circles, it is positively naïve. But it should not be. The change brought about by the networked information environment is deep. It is structural. It goes to the very foundations of how liberal markets and liberal democracies have coevolved for almost two centuries.

A series of changes in the technologies, economic organization, and social practices of production in this environment has created new opportunities for how we make and exchange information, knowledge, and culture. These changes have increased the role of nonmarket and nonproprietary production, both by individuals alone and by cooperative efforts in a wide range of loosely or tightly woven collaborations. These newly emerging practices have seen remarkable success in areas as diverse as software development and investigative reporting, avant-garde video and multiplayer online games. Together, they hint at the emergence of a new information environment, one in which individuals are free to take a more active role than was possible in the industrial information economy of the twentieth century. This new freedom holds great practical promise: as a dimension of individual freedom; as a platform for better democratic participation; as a medium to foster a more critical and self-reflective culture; and, in an increasingly information dependent global economy, as a mechanism to achieve improvements in human development everywhere.

The rise of greater scope for individual and cooperative nonmarket production of information and culture, however, threatens the incumbents of the industrial information economy. At the beginning of the twenty-first century, we find ourselves in the midst of a battle over the institutional ecology of the digital environment. A wide range of laws and institutions— from broad areas like telecommunications, copyright, or international trade regulation, to minutiae like the rules for registering domain names or whether digital television receivers will be required by law to recognize a particular code—are being tugged and warped in efforts to tilt the playing field toward one way of doing things or the other. How these battles turn out over the next decade or so will likely have a significant effect on how we come to know what is going on in the world we occupy, and to what extent and in what forms we will be able—as autonomous individuals, as citizens, and as participants in cultures and communities—to affect how we and others see the world as it is and as it might be.”

He explains the emergence of the networked information economy, “The most advanced economies in the world today have made two parallel shifts that, paradoxically, make possible a significant attenuation of the limitations that market-based production places on the pursuit of the political values central to liberal societies. The first move, in the making for more than a century, is to an economy centered on information (financial services, accounting, software, science) and cultural (films, music) production, and the manipulation of symbols (from making sneakers to branding them and manufacturing the cultural significance of the Swoosh). The second is the move to a communications environment built on cheap processors with high computation capabilities, interconnected in a pervasive network—the phenomenon we associate with the Internet. It is this second shift that allows for an increasing role for nonmarket production in the information and cultural production sector, organized in a radically more decentralized pattern than was true of this sector in the twentieth century. The first shift means that these new patterns of production—nonmarket and radically decentralized—will emerge, if permitted, at the core, rather than the periphery of the most advanced economies. It promises to enable social production and exchange to play a much larger role, alongside property- and market based production, than they ever have in modern democracies.”

He makes three observations about this emergence:

  1. Nonproprietary strategies have always been more important in information production than they were in the production of steel or automobiles, even when the economics of communication weighed in favor of industrial models.
  2. We have in fact seen the rise of nonmarket production to much greater importance. Individuals can reach and inform or edify millions around the world. Such a reach was simply unavailable to diversely motivated individuals before, unless they funneled their efforts through either market organizations or philanthropically or state-funded efforts.
  3. And, likely most radical, new, and difficult for observers to believe, is the rise of effective, large-scale cooperative efforts—peer production of information, knowledge, and culture.

He states that individuals have enhanced autonomy: “The networked information economy improves the practical capacities of individuals along three dimensions: (1) it improves their capacity to do more for and by themselves; (2) it enhances their capacity to do more in loose commonality with others, without being constrained to organize their relationship through a price system or in traditional hierarchical models of social and economic organization; and (3) it improves the capacity of individuals to do more in formal organizations that operate outside the market sphere. This enhanced autonomy is at the core of all the other improvements I describe. Individuals are using their newly expanded practical freedom to act and cooperate with others in ways that improve the practiced experience of democracy, justice and development, a critical culture, and community.”

The major battle to be fought is between open and proprietary models. “The battle over the relative salience of the proprietary, industrial models of information production and exchange and the emerging networked information economy is being carried out in the domain of the institutional ecology of the digital environment. In a wide range of contexts, a similar set of institutional questions is being contested: To what extent will resources necessary for information production and exchange be governed as a commons, free for all to use and biased in their availability in favor of none? To what extent will these resources be entirely proprietary, and available only to those functioning within the market or within traditional forms of well funded nonmarket action like the state and organized philanthropy? We see this battle played out at all layers of the information environment: the physical devices and network channels necessary to communicate; the existing information and cultural resources out of which new statements must be made; and the logical resources—the software and standards—necessary to translate what human beings want to say to each other into signals that machines can process and transmit. Its central question is whether there will, or will not, be a core common infrastructure that is governed as a commons and therefore available to anyone who wishes to participate in the networked information environment outside of the market-based, proprietary framework.”

He writes about four choices: “There are four methodological choices represented by the thesis that I have outlined up to this point, and therefore in this book as a whole, which require explication and defense. The first is that I assign a very significant role to technology. The second is that I offer an explanation centered on social relations, but operating in the domain of economics, rather than sociology. The third and fourth are more internal to liberal political theory. The third is that I am offering a liberal political theory, but taking a path that has usually been resisted in that literature—considering economic structure and the limits of the market and its supporting institutions from the perspective of freedom, rather than accepting the market as it is, and defending or criticizing adjustments through the lens of distributive justice. Fourth, my approach heavily emphasizes individual action in nonmarket relations. Much of the discussion revolves around the choice between markets and nonmarket social behavior. In much of it, the state plays no role, or is perceived as playing a primarily negative role, in a way that is alien to the progressive branches of liberal political thought. In this, it seems more of a libertarian or an anarchistic thesis than a liberal one. I do not completely discount the state, as I will explain. But I do suggest that what is special about our moment is the rising efficacy of individuals and loose, nonmarket affiliations as agents of political economy. Just like the market, the state will have to adjust to this new emerging modality of human action. Liberal political theory must first recognize and understand it before it can begin to renegotiate its agenda for the liberal state, progressive or otherwise.”

This book is well worth studying and would make a great text for a discussion group. If anyone out there has read the book and would like to collaborate on such an endeavor, please let me know.

The Wealth of Networks: How Social Production Transforms Markets and Freedom, Yochai Benkler, Yale University Press, 2006, 515pp

The Wealth of Networks (PDF)

Monday, January 3, 2011

Corporate Governance and Complexity Theory

by Robert Kropp

"The authors use complexity theory to argue that the inclusion of the voices of all stakeholders leads to improved corporate governance.

SocialFunds.com -- Despite the increased attention to good corporate governance that has occurred over much of the last decade, failures of governance such as those that led to the financial crisis have continued. The situation has led to various calls for improvements in governance, from increased regulation to more power residing in shareowners.

In their book entitled "Corporate Governance and Complexity," the authors call for an approach to corporate governance that encompasses more than one or two approaches to the problem. Using the framework of complexity theory, they argue that the concerns of all stakeholders—regulators, institutional investors, management, employees, etc.—are inter-connected in a "corporate governance social ecosystem," in which the most effective approaches to improvements in corporate governance are not simply top-down, through increased government regulation, but simultaneously "at both the micro and macro levels of interaction."

Because the many dimensions of corporate governance "do not exist in isolation," the authors argue, "both the endogenous and exogenous conditions are in constant flux;" that is, conditions specific to each corporation, as well as the larger environments in which corporations act, have the potential to influence the governance practices of a firm. The relationship between management and shareowners, for instance, is of course paramount, as is the interaction between the corporation and regulators. However, the authors argue, while these relationships are important, they are "not sufficient to understand the intricate web of connectivity and interdependence within the corporate governance social ecosystem."

Defining feedback as "a process that influences behavior," the authors assert that the practice "maintains stability in a system" and "provides greater clarity through the consultation process, which often leads to greater engagement."

"Feedback is also necessary in the process of emergence," the authors continue, by which is meant the interaction among stakeholders that leads to the evolution of a new corporate culture. As an example of emergence, the authors point to the increasing emphasis upon the election of independent directors, which they attribute to such factors as "a change in society regarding the importance of ethical values in corporate behavior and in the relative value of non-financial objectives," as well as the increased activity of institutional investors.

According to the authors, the financial crisis provided evidence that "corporate governance at company and industry level, as well as regulation on corporate governance more widely, is deficient in the sense that it does not properly deal with the complex nature of these relationships and the potential conflicts of interests therein." Considering that many bank failures occurred in Anglo-Saxon countries such as the US and the UK, where shareowner rights are considered to be strongest, the authors suggest further that "no one corporate governance system is superior, despite the widely accepted view in the academic literature claiming that investor protection is higher in common-law countries (such as the UK and the US)."

It should be noted, however, that in the view of the authors the concept of shareowners is one that focuses on short-term gains, with the exception of the aforementioned institutional investors whose increased activity has led to a greater focus on the election of independent boards. Most observers would agree with the statement of the authors, that "one of the possible reasons for the latest series of corporate scandals may have been managerial remuneration packages that focused too much on short-term profits without any regard for the long-term future and survival of the organization and ignoring the basics of proper risk management." However, the authors attribute the development to the focus of directors on "the principle of shareholders' primacy in the short run," instead of the long-term interests of all stakeholders.

An important finding of the book is "that the mere threat of disciplinary action may force organizations to change their behavior and set in motion the creation of a new order." As an example, the authors point to the experience of Gap, the apparel company that came under fire in 2007 because of the use of child labor in its supply chains. Not only did Gap respond to stakeholder pressure by removing products associated with child labor from its shelves; it also instituted more stringent guidelines for its supply chain partners.

Complexity theory as applied to the governance of corporations may well be a subject too arcane for everyday application. Indeed, this review only makes reference to it inasmuch as its most general applications can be seen as relevant to sustainable investors and others who wish to effect improvements in corporate governance.

That said, what is of value in the theory is the democratization of the role of all stakeholders. In the theory as espoused by the authors, all stakeholders add their voices to the effort to improve corporate governance, and as a result of the added perspectives the chances for improved governance increase. It is appealing to consider that a democratic process can improve the governance of institutions such as corporations, which are better known for being managed autocratically."

Book Review: Corporate Governance and Complexity, Social Funds

Corporate Governance and Complexity Theory, Marc Goergen, Christine Mallin, Eve Mitleton-Kelly, Ahmed Al-Hawamdeh and Iris Hse-Yu Chiu, Edward Elgar, 2010, 141pp

In Praise of Positive Deviance

From Utne Reader, January-February 2011

"What's the key to transforming a community that's merely surviving into one that's thriving? Perhaps just a single person.

The September 2010 issue of Ode features an excerpt from the book The Power of Positive Deviance by Richard Pascale, Jerry Sternin, and Monique Sternin. According to the authors, the theory of positive deviance holds that in every setting there's at least one person who strays from the norm—a positive deviant—someone who has found a way to buck the status quo and solve a problem despite the same odds that are stacked against everyone.

Such people have discovered crops able to survive the torrential conditions of floodplains and have transformed desert dunes into groves of saplings. These deviants don't have any special advantage—they just think outside the box. Positive deviance, the authors explain, "requires retraining ourselves to pay attention differently—awakening minds accustomed to overlooking outliers, and cultivating skepticism about the inevitable 'that's just the way it is.'"

But identifying these statistical outliers isn't enough. The community needs to have a way to share the information, or the progress is simply isolated. The authors caution that "in the absence of a social process to disseminate innovation and incorporate it into the group repertoire, discovery bears few progeny."

Positive deviance is most effective when a technical solution, such as a vaccine, doesn't exist, and where a problem is deeply rooted in the social fabric and mind-set of the culture. The concept has helped reduce gang violence in schools, corruption in Kenya, and minority dropout rates in California, proving that communities that are willing to embrace deviants can experience positive change."

The Power of Positive Deviance, Richard Pascale, Jerry Sternin, and Monique Sternin, Harvard Business Press, 2010, 256pp

The Working Class

Hard times are good times to rethink our attitudes about the fungibility of workers.

Bette Lynch Husted, Oregon Humanities

"Our current economic hard times have touched everyone, but there’s no question that some have been hurt more than others. And it doesn’t look as if that’s going to change anytime soon. So I wonder: should those of us whose lives are a bit easier be thinking about the attitude behind the idea that workers are fungible? Or that, in a human version of “just-in-time” inventory, it’s more efficient to have them appear to perform a specific task and then disappear? Should we be questioning the ongoing waste of human creativity and skill, as well as the increasingly vast disparity of wealth in our country? Have we, gradually and almost without noticing, been lured into accepting the unacceptable?

And, if this isn’t the kind of culture we want, what might we do about that?

Political candidates get themselves into hot water if they speak about “redistributing the wealth,” but eventually an economy will self-destruct if this doesn’t happen, and continue to happen. “The gift must always move,” as Native American cultures have been trying to teach us. We balk at this idea at least partially because we have been carefully taught that those “others”—the people who cut our grass and trim our trees, cut our meat and pick our fruit, those who unpack the boxes and stock the shelves and stand behind the counter to sell us what we need—don’t deserve the kind of lives white-collar workers have. It’s a belief that threatens the meaningfulness of every life, including our own."

Read More

Sunday, January 2, 2011

How Radio and TV Became Centralized and Enclosed Instead of Decentralized and Open

From The Wealth of Networks by Yochai Benkler

This is very interesting to read, given the parallels between radio's origins and the Internet's, and how business and the Federal Government worked to alter the course of history.

“The introduction of radio was the next and only serious potential inflection point, prior to the emergence of the Internet, at which some portion of the public sphere could have developed away from the advertiser-supported mass-media model. In most of Europe, radio followed the path of state-controlled media, with variable degrees of freedom from the executive at different times and places. Britain developed the BBC, a public organization funded by government-imposed levies, but granted sufficient operational freedom to offer a genuine platform for a public sphere, as opposed to a reflection of the government's voice and agenda. While this model successfully developed what is perhaps the gold standard of broadcast journalism, it also grew as a largely elite institution throughout much of the twentieth century. The BBC model of state-based funding and monopoly with genuine editorial autonomy became the basis of the broadcast model in a number of former colonies: Canada and Australia adopted a hybrid model in the 1930s. This included a well-funded public broadcaster, but did not impose a monopoly in its favor, allowing commercial broadcasters to grow alongside it. Newly independent former colonies in the postwar era that became democracies, like India and Israel, adopted the model with monopoly, levy-based funding, and a degree of editorial independence. The most currently visible adoption of a hybrid model based on some state funding but with editorial freedom is Al Jazeera, the Arab satellite station partly funded by the Emir of Qatar, but apparently free to pursue its own editorial policy, whose coverage stands in sharp contrast to that of the state-run broadcasters in the region. In none of these BBC-like places did broadcast diverge from the basic centralized communications model of the mass media, but it followed a path distinct from the commercial mass media. Radio, and later television, was a more tightly controlled medium than was the printed press; its intake, filtering, and synthesis of public discourse were relatively insulated from the pressure of both markets, which typified the American model, and politics, which typified the state-owned broadcasters. These were instead controlled by the professional judgments of their management and journalists, and showed both the high professionalism that accompanied freedom along both those dimensions and the class and professional elite filters that typify those who control the media under that organizational model. The United States took a different path that eventually replicated, extended, and enhanced the commercial, advertiser-supported mass-media model originated in the printed press. This model was to become the template for the development of similar broadcasters alongside the state-owned and independent BBC-model channels adopted throughout much of the rest of the world, and of programming production for newer distribution technologies, like cable and satellite stations.

The birth of radio as a platform for the public sphere in the United States was on election night in 1920. Two stations broadcast the election returns as their launch pad for an entirely new medium - wireless broadcast to a wide audience. One was the Detroit News amateur station, 8MK, a broadcast that was framed and understood as an internal communication of a technical fraternity - the many amateurs who had been trained in radio communications for World War I and who then came to form a substantial and engaged technical community. The other was KDKA Pittsburgh, launched by Westinghouse as a bid to create demand for radio receivers of a kind that it had geared up to make during the war. Over the following four or five years, it was unclear which of these two models of communication would dominate the new medium. By 1926, however, the industrial structure that would lead radio to follow the path of commercial, advertiser-supported, concentrated mass media, dependent on government licensing and specializing in influencing its own regulatory oversight process was already in place.

Although this development had its roots in the industrial structure of radio production as it emerged from the first two decades of innovation and businesses in the twentieth century, it was shaped significantly by political - regulatory choices during the 1920s. At the turn of the twentieth century, radio was seen exclusively as a means of wireless telegraphy, emphasizing ship-to-shore and ship-to-ship communications. Although some amateurs experimented with voice programs, broadcast was a mode of point-to-point communications; entertainment was not seen as its function until the 1920s. The first decade and a half of radio in the United States saw rapid innovation and competition, followed by a series of patent suits aimed to consolidate control over the technology. By 1916, the ideal transmitter based on technology available at the time required licenses of patents held by Marconi, AT&T, General Electric (GE), and a few individuals. No licenses were in fact granted. The industry had reached stalemate. When the United States joined the war, however, the navy moved quickly to break the stalemate, effectively creating a compulsory cross-licensing scheme for war production, and brought in Westinghouse, the other major potential manufacturer of vacuum tubes alongside GE, as a participant in the industry. The two years following the war saw intervention by the U.S. government to assure that American radio industry would not be controlled by British Marconi because of concerns in the navy that British control over radio would render the United States vulnerable to the same tactic Britain used against Germany at the start of the war - cutting off all transoceanic telegraph communications. The navy brokered a deal in 1919 whereby a new company was created - the Radio Corporation of America (RCA) - which bought Marconi's American business. By early 1920, RCA, GE, and AT&T entered into a patent cross-licensing model that would allow each to produce for a market segment: RCA would control transoceanic wireless telegraphy, while GE and AT&T's Western Electric subsidiary would make radio transmitters and sell them under the RCA brand. This left Westinghouse with production facilities developed for the war, but shut out of the existing equipment markets by the patent pool. Launching KDKA Pittsburgh was part of its response:

Westinghouse would create demand for small receivers that it could manufacture without access to the patents held by the pool. The other part of its strategy consisted of acquiring patents that, within a few months, enabled Westinghouse to force its inclusion in the patent pool, redrawing the market division map to give Westinghouse 40 percent of the receiving equipment market. The first part of Westinghouse's strategy, adoption of broadcasting to generate demand for receivers, proved highly successful and in the long run more important. Within two years, there were receivers in 10 percent of American homes. Throughout the 1920s, equipment sales were big business.

Radio stations, however, were not dominated by the equipment manufacturers, or by anyone else for that matter, in the first few years. While the equipment manufacturers did build powerful stations like KDKA Pittsburgh, WJZ Newark, KYW Chicago (Westinghouse), and WGY Schenectady (GE), they did not sell advertising, but rather made their money from equipment sales. These stations did not, in any meaningful sense of the word, dominate the radio sphere in the first few years of radio, as the networks would indeed come to do within a decade. In November 1921, the first five licenses were issued by the Department of Commerce under the new category of "broadcasting" of "news, lectures, entertainment, etc." Within eight months, the department had issued another 453 licenses. Many of these went to universities, churches, and unions, as well as local shops hoping to attract business with their broadcasts. Universities, seeing radio as a vehicle for broadening their role, began broadcasting lectures and educational programming. Seventy-four institutes of higher learning operated stations by the end of 1922. The University of Nebraska offered two-credit courses whose lectures were transmitted over the air. Churches, newspapers, and department stores each forayed into this new space, much as we saw the emergence of Web sites for every organization over the course of the mid-1990s. Thousands of amateurs were experimenting with technical and format innovations. While receivers were substantially cheaper than transmitters, it was still possible to assemble and sell relatively cheap transmitters, for local communications, at prices sufficiently low that thousands of individual amateurs could take to the air. At this point in time, then, it was not yet foreordained that radio would follow the mass-media model, with a small number of well-funded speakers and hordes of passive listeners. Within a short period, however, a combination of technology, business practices, and regulatory decisions did in fact settle on the model, comprised of a small number of advertiser-supported national networks, that came to typify the American broadcast system throughout most of the rest of the century and that became the template for television as well.

Herbert Hoover, then secretary of commerce, played a pivotal role in this development. Throughout the first few years after the war, Hoover had positioned himself as the champion of making control over radio a private market affair, allying himself both with commercial radio interests and with the amateurs against the navy and the postal service, each of which sought some form of nationalization of radio similar to what would happen more or less everywhere else in the world. In 1922, Hoover assembled the first of four annual radio conferences, representing radio manufacturers, broadcasters, and some engineers and amateurs. This forum became Hoover's primary stage. Over the next four years, he used its annual meeting to derive policy recommendations, legitimacy, and cooperation for his regulatory action, all 'without a hint' of authority under the Radio Act of 1912. Hoover relied heavily on the rhetoric of public interest and on the support of amateurs to justify his system of private broadcasting coordinated by the Department of Commerce. From 1922 on, however, he followed a pattern that would systematically benefit large commercial broadcasters over small ones; commercial broadcasters over educational and religious broadcasters; and the one-to-many broadcasts over the point-to-point, small-scale wireless telephony and telegraphy that the amateurs were developing. After January 1922, the department inserted a limitation on amateur licenses, excluding from their coverage the broadcast of "weather reports, market reports, music, concerts, or speeches, news or similar information or entertainment." This, together with a Department of Commerce order to all amateurs to stop broadcasting at 360 meters (the wave assigned broadcasting), effectively limited amateurs to shortwave radio telephony and telegraphy in a set of frequencies then thought to be commercially insignificant. In the summer, the department assigned broadcasters, in addition to 360 meters, another band, at 400 meters. Licenses in this Class B category were reserved for transmitters operating at power levels of 500-1,000 watts, who did not use phonograph records. These limitations on Class B licenses made the newly created channel a feasible home only to broadcasters who could afford the much-more-expensive, high-powered transmitters and could arrange for live broadcasts, rather than simply play phonograph records. The success of this new frequency was not immediate, because many receivers could not tune out stations broadcasting at the two frequencies in order to listen to the other. Hoover, failing to move Congress to amend the radio law to provide him with the power necessary to regulate broadcasting, relied on the recommendations of the Second Radio Conference in 1923 as public support for adopting a new regime, and continued to act without legislative authority. He announced that the broadcast band would be divided in three: high-powered (500-1,000 watts) stations serving large areas would have no interference in those large areas, and would not share frequencies. They would transmit on frequencies between 300 and 545 meters. Medium-powered stations served smaller areas without interference, and would operate at assigned channels between 222 and 300 meters. The remaining low-powered stations would not be eliminated, as the bigger actors wanted, but would remain at 360 meters, with limited hours of operation and geographic reach.
Many of these lower-powered broadcasters were educational and religious institutions that perceived Hoover's allocation as a preference for the RCA-GE-AT&T-Westinghouse alliance. Despite his protestations against commercial broadcasting ("If a speech by the President is to be used as the meat in a sandwich of two patent medicine advertisements, there will be no radio left"), Hoover consistently reserved clear channels and issued high-power licenses to commercial broadcasters. The final policy action based on the radio conferences came in 1925, when the Department of Commerce stopped issuing licenses. The result was a secondary market in licenses, in which some religious and educational stations were bought out by commercial concerns. These purchases further gravitated radio toward commercial ownership. The licensing preference for stations that could afford high-powered transmitters, long hours of operation, and compliance with high technical constraints continued after the Radio Act of 1927. As a practical matter, it led to assignment of twenty-one out of the twenty-four clear channel licenses created by the Federal Radio Commission to the newly created network-affiliated stations.

Over the course of this period, tensions also began to emerge within the patent alliance. The phenomenal success of receiver sales tempted Western Electric into that market. In the meantime, AT&T, almost by mistake, began to challenge GE, Westinghouse, and RCA in broadcasting as an outgrowth of its attempt to create a broadcast common-carriage facility. Despite the successes of broadcast and receiver sales, it was not clear in 1922-1923 how the cost of setting up and maintaining stations would be paid for. In England, a tax was levied on radio sets, and its revenue used to fund the BBC. No such proposal was considered in the United States, but the editor of Radio Broadcast proposed a national endowed fund, like those that support public libraries and museums, and in 1924, a committee of New York businessmen solicited public donations to fund broadcasters (the response was so pitiful that the funds were returned to their donors). AT&T was the only company to offer a solution. Building on its telephone service experience, it offered radio telephony to the public for a fee. Genuine wireless telephony, even mobile telephony, had been the subject of experimentation since the second decade of radio, but that was not what AT&T offered. In February 1922, AT&T established WEAF in New York, a broadcast station over which AT&T was to provide no programming of its own, but instead would enable the public or program providers to pay on a per-time basis. AT&T treated this service as a form of wireless telephony so that it would fall, under the patent alliance agreements of 1920, under the exclusive control of AT&T.

RCA, Westinghouse, and GE could not compete in this area. "Toll broadcasting" was not a success by its own terms. There was insufficient demand for communicating with the public to sustain a full schedule that would interest listeners tuning into the station. As a result, AT&T produced its own programming. In order to increase the potential audience for its transmissions while using its advantage in wired facilities, AT&T experimented with remote transmissions, such as live reports from sports events, and with simultaneous transmissions of its broadcasts by other stations, connected to its New York feed by cable. In its effort to launch toll broadcasting, AT&T found itself by mid-1923 with the first functioning precursor to an advertiser-supported broadcast network.

The alliance members now threatened each other: AT&T threatened to enter into receiver manufacturing and broadcast, and the RCA alliance, with its powerful stations, threatened to adopt "toll broadcasting," or advertiser-supported radio. The patent allies submitted their dispute to an arbitrator, who was to interpret the 1920 agreements, reached at a time of wireless telegraphy, to divide the spoils of the broadcast world of 1924. In late 1924, the arbitrator found for RCA-GE-Westinghouse on almost all issues. Capitalizing on RCA's difficulties with the antitrust authorities and congressional hearings over aggressive monopolization practices in the receiving set market, however, AT&T countered that if the 1920 agreements meant what the arbitrator said they meant, they were a combination in restraint of trade to which AT&T would not adhere. Bargaining in the shadow of the mutual threats of contract and antitrust actions, the former allies reached a solution that formed the basis of future radio broadcasting. AT&T would leave broadcasting. A new company, owned by RCA, GE, and Westinghouse would be formed, and would purchase AT&T's stations. The new company would enter into a long-term contract with AT&T to provide the long-distance communications necessary to set up the broadcast network that David Sarnoff envisioned as the future of broadcast. This new entity would, in 1926, become the National Broadcasting Company (NBC). AT&T's WEAF station would become the center of one of NBC's two networks, and the division arrived at would thereafter form the basis of the broadcast system in the United States.
By the middle of 1926, then, the institutional and organizational elements that became the American broadcast system were, to a great extent, in place. The idea of government monopoly over broadcasting, which became dominant in Great Britain, Europe, and their former colonies, was forever abandoned. The idea of a private-property regime in spectrum, which had been advocated by commercial broadcasters to spur investment in broadcast, was rejected on the backdrop of other battles over conservation of federal resources. The Radio Act of 1927, passed by Congress in record speed a few months after a court invalidated Hoover's entire regulatory edifice as lacking legal foundation, enacted this framework as the basic structure of American broadcast. A relatively small group of commercial broadcasters and equipment manufacturers took the lead in broadcast development. A governmental regulatory agency, using a standard of "the public good," allocated frequency, time, and power assignments to minimize interference and to resolve conflicts. The public good, by and large, correlated to the needs of commercial broadcasters and their listeners. Later, the broadcast networks supplanted the patent alliance as the primary force to which the Federal Radio Commission paid heed. The early 1930s still saw battles over the degree of freedom that these networks had to pursue their own commercial interests, free of regulation (studied in Robert McChesney's work). By that point, however, the power of the broadcasters was already too great to be seriously challenged. Interests like those of the amateurs, whose romantic pioneering mantle still held strong purchase on the process, educational institutions, and religious organizations continued to exercise some force on the allocation and management of the spectrum. However, they were addressed on the periphery of the broadcast platform, leaving the public sphere to be largely mediated by a tiny number of commercial entities running a controlled, advertiser-supported platform of mass media. Following the settlement around radio, there were no more genuine inflection points in the structure of mass media. Television followed radio, and was even more concentrated. Cable networks and satellite networks varied to some extent, but retained the basic advertiser-supported model, oriented toward luring the widest possible audience to view the advertising that paid for the programming.”

The Wealth of Networks: How Social Production Transforms Markets and Freedom, Yochai Benkler, Yale University Press, 2006