is203 - Social and Organizational Issues of Information » Week 6

Week 6

Feb. 20th: Social Stratification and Information Technology

Warschauer, Mark. 2002. “Reconceptualizing the Digital Divide.” First Monday 7. [HTML]

Solop, Frederic. 2001. “Digital Democracy Comes of Age: Internet Voting and the 2000 Arizona Democratic Primary Election.” Political Science and Politics 34:289-293. [PDF]

Feb. 22nd: Privacy and Surveillance

Chapter 8 in Rheingold, Howard. 2002. Smart Mobs: The Next Social Revolution: Perseus Books.

Lankshear, Gloria and David Mason. 2001. “Within the Panopticon? Surveillance, Privacy and the Social Relations of Work in Two Call Centres.”

January 2nd, 2007
posted by:

15 Comments

  • 1. yliu  |  February 17th, 2007 at 9:50 pm

    This week’s papers seem to tie back rather well to prior readings. The digital kiosk and Information Age Town projects described in the Warschauer paper were case studies in the same vein as the water-boiling innovation in the Rogers reading on diffusion of innovation. As in the water-boiling case, technological innovation requires some common social foundation and context of use to encourage adoption. Lacking this common foundation, the townspeople of Ennis opted to reject the innovation of computer systems and networked technologies. The complexity of the innovation, a lack of perceived relative advantage, and a lack of compatibility with existing social systems (the unemployed congregating at the welfare office for information exchange and mutual support) caused many to end up selling the machines on the black market - a result that is very naturally explained by Rogers’ framework. There are other forces at work beyond technological access, and an effective diffusion effort should attempt to counteract these forces in turn.

    This seems to question the latest efforts to bring technology to developing nations. One that comes to mind for me is the One Laptop Per Child project. In the OLPC case, the project appears to deliver a technological solution to a complicated social problem of education. The OLPC project page provides listings of specifications, which is very technologically oriented (down to the data rates of its 128 MiB DDR266 DRAM - 133 MHz, btw), and doesn’t focus on potential social implications or issues in adoption at all. Is it fair to draw an analogy between the Hole in the Wall scenario or the Ennis project in Warschauer and what is going on now with OLPC? It would be nice if delivering a custom laptop at low cost were the solution to education issues in developing nations, but Warschauer’s admonition to look beyond “devices and conduits” for true digital literacy is an interesting counterpoint.

    On the e-voting case study, there are (rather obviously) going to be demographic differences between the general population and those comfortable enough with the Internet to vote in elections with electronic ballots. I’d surmise that age, income, and education, the significant variables that the study found in preference for Internet voting, would probably also tend to explain variations in Internet usage in general. In 2000, these variables might very sharply delineate these demographic blocs. However, in the future, I’d think that a general comfort with Internet usage and the expansion of Internet availability might reduce the influence of some of these variables. Many of us grew up with the concept of the Internet being a reasonable place to conduct our affairs, from banking to general leisure (and voting in student govt elections online, etc). Those of us with these experiences, I’d posit, would also probably use the Internet for general elections. As we get older, there is no good reason to believe that we’d stop using the Net in our grumpy old age.

    Security and reliability problems notwithstanding, of course. The 2000 study mentioned voting irregularities, technological failures, etc., almost as a footnote, but these turned out to be major problems that influenced perceptions of non-paper-based voting systems.

    The critical views of social technology that the Rheingold paper referred to stated some common views against context-aware ubicomp, but I can’t help but wonder where we’ve heard this before. Ah yes, that’s right, social criticism of the telephone. “People pass each other unsmiling,” “eroding civility,” “more mechanical, less humane.” Every generation sees the next wave of social technology as a civilization-destroying apple being offered by the snake of progress, tempting us out of our blissfully ignorant Eden. Hm. The “satanic mills” that arose out of enclosure movements were symbols of the Industrial Revolution, which didn’t turn out to be that bad a thing for civilization and societies. In some sense, this may be the incompatibilities with existing social values being exposed as the innovation (industrial production, unlanded labor, Internet usage, whatever) is being diffused. It is also useful to note that these incompatibilities did not tear apart civilization - society adapted, after some struggle.

  • 2. mattchew  |  February 19th, 2007 at 1:58 pm

    The concept of literacy as a model for the adoption of information technology is appealing, in large part because the use of ICT requires a superset of literacy. Warschauer points out that the concept of literacy has meant different things at different times or in different societies, so isn’t the effective use of ICT simply the current manifestation of literacy at the start of the 21st century? One cannot use a computer without being able to interpret the symbols therein, be they text, ideograms, or some other representation of an action, concept or thing. Unless one builds an instance of ICT solely for the isolated use of oneself, there needs to be some sort of shared interpretation of what the symbols used mean. This is true both for the ICT interface and the information contained within or accessed through the ICT. In industrialized societies this shared understanding, or literacy, is provided in part by state-mandated formal education.

    The educational opportunities within a society are generally controlled by the power structure of that society, and the level of education offered to an individual or group is often dependent on that group or individual’s assigned role. In Europe in the middle ages, the Roman Catholic Church maintained an effective monopoly on literacy as a way of preserving political influence over that continent for nearly a thousand years. Similarly, in several US states prior to the American civil war, it was forbidden for a slave to learn to read or write. In these primarily agrarian societies, there was no economic value added by mass education, and there was the perceived danger that it might cause people to question rather than accept authority. Most revolutions have been led by those with a higher level of education than the general population.

    With the advent of industrialized capitalism, there was a need for a trained workforce, and the economic advantage of mass education outweighed the potential political pitfalls. In societies such as South Africa under apartheid, compulsory education was explicitly designed to keep subjugated people subservient to the power structure, solely teaching them the skills necessary for them to be able to fulfill their assigned economic and political roles. This led to significant discontent among the student population there, who used the Pink Floyd song “Another Brick in the Wall pt. 2” as an anthem in the anti-apartheid student revolts of the early 1980s. The song was then banned in South Africa and several other countries. One can wonder if popular disillusionment with the educational system in the US and the UK at the time is indicated by the same song being a #1 hit in both countries. The societal roles for which Americans and Brits were being “educated” were in general much less onerous than those available under apartheid: hence their consumerist (buying music) rather than political (rebelling against the state) response.

    So if effective use of ICT is a form of literacy, diffused via education, that can provide economic advantage, the question for those in power is how to selectively control and package the diffusion of ICT literacy in a way that maximizes economic benefit yet minimizes risk to the political status quo.

    Lyrics to Another Brick in the Wall Part II:
    http://www.azlyrics.com/lyrics/pinkfloyd/anotherbrickinthewallpartii.html

  • 3. elisa  |  February 19th, 2007 at 11:39 pm

    By rejecting the ‘bipolar societal split,’ Warschauer introduces the interesting concept of different degrees of access to technology and considers it more useful than the cruder division between “haves” and “have nots,” especially in the thorny ICT for development (horrendously acronymized into ICT4D) field. Being very interested in this area, I found his article both enlightening and frustrating: enlightening, because the idea of a spectrum of access to technology rather than a divide is very useful and usually overlooked; frustrating, because the examples he provides (the professor at UCLA, the student in Seoul, the rural activist in Indonesia) don’t seem very convincing: two of them are people who do not have personal, physical access to ICT, but certainly have the tools and capacity to access ICT, and use it in what Warschauer would probably call a meaningful way. The student in Seoul can access and use an internet café, the rural activist is part of a network that accesses ICT, and one imagines that when she takes a break from her rural activism, she will go back to an office with computer and internet. I would define all these people as ‘haves’, to use the old divide-based category, as opposed to those who truly ‘have not’ because they cannot access, either directly or indirectly, ICT, and even if they could, they would not be able to use the information available. By distinguishing between different degrees of accessibility in this way, Warschauer ends up focusing on access / ownership of technology, exactly the problem he so well identifies earlier in the article. I think that the ‘haves’ versus ‘have nots’ definition of digital divide can still be useful in certain circumstances; once a basic access to ICT is available, then it is important to concentrate on those whom Manuel Castells calls the ‘have less’ in order to move forward.

    Like Yiming, I immediately thought of Negroponte’s One Laptop Per Child project (that would be the OLPC acronym), and of how it focuses heavily on the ‘tool’ without giving much attention to, in Warschauer’s words, the “content and language, literacy and education, and community and institutional structures (which) must all be taken into account if meaningful access to new technologies is to be provided.” Is the project doomed to failure, like its predecessors, then? A few months ago I would have replied with a resounding ‘of course!,’ but now, perhaps because the destiny of graduate students is to become complete relativists before embracing their philosophical path, I am not so certain. Success depends on the parameters one uses to evaluate it. Trite and obvious as this might be, consider that we do not yet have any tools/parameters/benchmarks to evaluate the success or failure of ICT projects in developing countries. Some cases are quite clear-cut (the Egypt and USAID computers Warschauer mentions), but some others can be interpreted one way or the other, according to the interpreter’s agenda and viewpoint. The Indian children spent most of their time drawing with paint programs or playing computer games. Is this time wasted, or a new window opened? Maybe this time will spark in them the “motivation, desire, and confidence to read” (or paint, or learn programming), to borrow again from Warschauer’s writing. What is the time frame we should use to judge success or failure of ICT projects? What are the parameters? Is ‘fun’ an acceptable outcome in ICT4D projects, or do all ICT4D computers have to have Masterpiece Theatre embedded in the OS? Is wasting time on the web or on the computer a right that is earned only when living in a developed economy? I’d like to answer, but I have to go – I’m falling behind in contributing to the nine billion hours humans spend yearly playing solitaire.

  • 4. karenhsu  |  February 20th, 2007 at 2:10 am

    Mark Warschauer presents interesting cases of problematic ICT programs that exemplify the shortcomings of approaches to technology diffusion that focus too narrowly on access to hardware and software. Instead, he emphasizes the need to additionally address human and social factors, and uses this reasoning to justify the broadening of the “digital divide” concept.

    As with Warschauer’s examples, the same social hurdles apply to the acceptance of Internet voting. For example, it’s not enough that the technology is there and works — how do we gain the confidence of voters, especially in a technology whose process isn’t very transparent to the user? There are some who advocate opening up the source (and compiler, too!) not only to promote robustness of code (more eyes = more discovered bugs… it’s also good practice to avoid “security by obscurity”), but also to show that no unfairness was built into the software. This, however, opens the door to other complications, such as ensuring version consistency. Though, I’m not too sure that opaqueness is Internet voting’s greatest problem, as it also applies to other forms of e-voting (e.g., DRE systems, optical scan systems, etc.) that have already been used in binding elections for many years. In any case, Solop’s findings suggest that Internet voting promotes participation (in a good way), but I wonder if a larger portion of that increase consists of uneducated votes. Since voting would be made much more accessible, might those who are eligible vote for the sake of voting without doing prior research?

  • 5. mcd  |  February 20th, 2007 at 9:14 pm

    Many of the defenses of the “If you build it, they will come” model of ICT4D (btw, 3li5a, I 2 think this CMC-era acronym is far from gr8, imho) discussed in class today struck me as techno-centric and circular, and I was glad to have read a particular line in Warschauer’s article: Literacy and “computer and Internet use bring no automatic benefit outside of [their] particular functions.” As Americans and iSchool students, we value ICT because we live in an economy increasingly built on its particular strengths. What good, though, does a child learning how to use pull-down menus and a joystick-mouse do if the only outlet of those skills is the kiosk on which he or she learned them?

    I am excited to see a context-sensitive approach, and I am reminded of Brown and Duguid’s rebuke of single-minded infoenthusiasm. ICT arguably has immeasurable potential to benefit societies, and decreasing cost and increasing access are noble goals. Technology, though, as we know too well, is not developed in a vacuum. I would propose that a potential factor driving the S-curve model of diffusion is that it takes time for technologies to integrate with existing structures and lifestyles. Technologies that start out as luxury items for early adopters gradually gain traction until (for some) becoming ubiquitous.

    For a technology to take hold requires successful integration into a culture. The failures and successes in Ireland that Warschauer points out illustrate this well. Planning, integration, and training go much further than dropping machine-gods out of the blue into a culture not yet built around them.
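    The S-curve model of diffusion mentioned above is commonly modeled with a logistic function: adoption is slow while early adopters experiment, accelerates as the technology integrates with existing structures, and saturates as it becomes ubiquitous. A short sketch (the steepness and midpoint values below are purely illustrative, not fitted to any real diffusion data):

```python
import math

def adoption_share(t: float, k: float = 0.8, t_mid: float = 10.0) -> float:
    """Logistic S-curve: fraction of eventual adopters who have adopted
    by time t. k sets how steep the takeoff is; t_mid is the inflection
    point, where half of the eventual adopters are on board."""
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

# Slow start among early adopters, rapid middle, saturating tail:
trajectory = [round(adoption_share(t), 3) for t in (0, 5, 10, 15, 20)]
```

    The "time to integrate with existing structures and lifestyles" proposed in the comment would show up here as a large t_mid: the curve's takeoff is delayed until the cultural groundwork is in place.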

  • 6. igorp  |  February 20th, 2007 at 10:28 pm

    “Smart Mobs” Rheingold

    I do agree that it’s easiest to just sit there and poke holes in someone’s work, so I will start with the positives. Rheingold is an inspired writer and has no lack of ideas or, seemingly, depth of knowledge. There are several underlying themes here. The first seems to be “Whoa there, put the brakes on and ask the important questions!” The second is “This whole thing stinks of the Panopticon and we should look into the power dynamics.” The third is “The new technology redefines us as humans.”

    Each of the three was well explored, if in a less than perfectly organized manner. I think the best part of the first theme was him asking, “What questions will our grandchildren wish we had asked today?” I think that’s a very good way to think about it, though a bit reminiscent of Earth Day slogans. However, it reminds me of the agenda list for the Lions Club (or whatever) discussion of the new telephone technology that was printed in the telephone history book. Did it end up mattering to us that some stodgy old men were concerned the telephone would rot the youth? Maybe; if nothing else it made them feel better. The second theme of the Panopticon power dynamics was familiar, but I particularly enjoyed Foucault’s paraphrasing that power “reaches into the very grain of individuals, touches their bodies and inserts itself into their actions and attitudes, their discourses, learning processes and everyday lives.” Amen. I think the truth of this is hard for most of us to appreciate until we experience it. This is why people flee dictatorial countries for the freer ones even if their livelihoods were guaranteed. I really believe power dynamics are wired very deep in our systems (there I go with the evolutionary crap again). As for the theme of how we define our humanity in the face of technology that is so fundamentally changing our lives, I think it’s perhaps the most important, and here novel, question to explore. With all the unease about smart mobs and ‘technique’ slowly turning us into cyborgs, I agree that we don’t have much choice besides moving on ahead, albeit with caution. “Creating knowledge technologies and applying them to … and larger scales of cooperative enterprise is inextricable from what it is to be human.”

    OK, now that all that is over, one sentence for the negative fans. A bit too much like a free-thought rant that was vaguely massaged into a chapter/sub-headings format. The “Can Discipline Evolve?” subsection has absolutely nothing to do with discipline except one sentence stuck in at the end as an afterthought. OK, I’m over it.

    “Within the panopticon? Surveillance, privacy and the social relations of work in two call centres.” Lankshear and Mason.

    Again, I’ve put on my positive hat. The paper was refreshing in questioning what seemed to be a dominant and convenient opinion – that the man oppresses when given the choice. Of course, the study is limited to two call centres in the post-industrial, western UK, but the findings sound very reasonable to me (the ultimate touchstone). Unless you’re running an operation where you wield significant power over your employees and have an overabundance of replacements (i.e., a sweatshop in a third-world country), you have strong motivation to keep employees happy. Thus it would seem that the checks on technological privacy invasion and surveillance are provided by the context of a balanced society. A particularly vindictive manager could have conducted taping of employees above and beyond the required ½ hour. However, such an issue and the resulting oppressive environment could be easily recreated without fancy technology. For example, imagine a shift supervisor at a retail store who is constantly watching your back. Although technology expands options as to how to monitor people, it is kept in check by long-evolved social standards.

  • 7. matt earp  |  February 21st, 2007 at 9:57 am

    I found this to be a great piece. I’ve been reading a lot of critical theory recently, and more than one author has held Saint Rheingold up as a technological cheerleader who doesn’t take into account the limitations of the very innovations he’s championing. While some of “Smart Mobs” can read like that (especially when excerpted in small quotable snippets), reading this, the final chapter, was especially refreshing to me, since I always suspected that many of Rheingold’s critics were being, well, overly critical. Taken as a whole, chapter 8 of “Smart Mobs” is wonderfully lucid about the dangers that can come from unfettered technological championing; it asks us to think sensibly about the creations we’re making now, not just for the next 5-year business plan but for the long term; and it’s well situated within a history of cautionary tales, some by the very inventors of the technology we use today. “Threats to Liberty… Threats to Quality of Life … Threats to Human Dignity” … these are all questions that should concern us now as they have concerned us in the past.

    Yes, some of these worries are highly situated within the specific groups that Rheingold talks about. Japanese teenagers’ social habits are not good predictors of elderly Brazilians’ habits and attitudes towards technology. But there’s no doubt in my mind that some of the people who sell technology, who market technology, and who regulate technology and its use WISH that the whole world responded as though they were Japanese teenagers, the greatest consumers on the planet. And I don’t have a lot of hope that, left unregulated, without a voice of caution like Rheingold and his historical examples, the market’s concerns will fall on the side of the everyday person when they consider threats to liberty, quality of life, and human dignity on the massive scale they hope their technology will be accepted and used on. AT&T didn’t when they sold names and records to the government. Jack Valenti didn’t when he became involved in the BPDG. There are fundamental power imbalances at work here, and within the context of someone like Valenti (who, along with the MPAA, is really not so far removed from a more traditional example of a/The Mob/Mob Boss), Rheingold’s musings about the power of smart mobs start to make sense as a counter model to existing power structures. The same Valenti who is still a leading and immensely powerful advocate of DRM despite his countless blunders and predictions about the death of the movie industry through technological innovations that don’t stem from Hollywood. Dude is the Teflon don, wrapped in film. This should cause us to worry.

    Rheingold’s chapter drew my mind back to another connection with The Mob and technology that I read about a couple years ago. The article is behind The Times’s firewall, but the gist of it is all in the abstract: “Federal prosecutors and FBI officials charge alleged Gambino crime family members and associates bilked unsuspecting consumers of over $200 million over five years in sophisticated nationwide scheme that added bogus charges on their phone bills; believe it is first time organized crime figures have been charged with using billing fraud known as ‘cramming;’ organized crime figures used company that consolidates billings for service providers, allowing them to bill through local phone companies;” (NYTimes, February 11, 2004). To me, this is a perfect example of a lithe, agile organization, changing its business practices with the times, taking advantage of current technology to turn a profit that no doubt increased shareholder value. Except, A) it’s The Mob, B) ordinary people suffered because of it, and most importantly, C) people suffered because they were already confused by an aspect of their technology and willing to trust that everything was fine at the end of the day (who hasn’t looked at a phone bill, gone “huh?”, and then decided to pay it anyway, since the charge in question is only $2.38 and the process of calling the service provider just doesn’t seem worth it?).
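    The cramming scheme described above is mechanically simple — recurring small charges from unfamiliar vendors riding on a trusted bill — which suggests it is also mechanically detectable. A toy sketch of what such a check might look like (the vendor names, tuple layout, and $5 threshold are all made up for illustration):

```python
def flag_cramming_suspects(line_items, known_vendors, small=5.00):
    """Flag recurring charges from unfamiliar vendors that are small
    enough to slip past a quick glance at the bill.

    line_items: iterable of (month, vendor, amount) tuples.
    Returns a dict mapping suspect vendor -> list of (month, amount)."""
    hits = {}
    for month, vendor, amount in line_items:
        if vendor not in known_vendors and amount <= small:
            hits.setdefault(vendor, []).append((month, amount))
    # One-off charges may be legitimate; recurrence is the cramming tell.
    return {v: months for v, months in hits.items() if len(months) >= 2}
```

    The irony, of course, is that this is exactly the kind of scrutiny the confused, trusting customer in the scenario above never applies.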

    This is, of course, an extreme example. No companies I know of outside of certain hip-hop labels actively quote The Mob as a source of insight for good business practices. However, I’m not sure, and I suspect Rheingold isn’t either, that people’s acceptance of much modern tech won’t lead to subtler re-enactments of the cramming scenario above. That’s the danger of the panopticon… if we’re all busy surveilling ourselves, what does that leave those with power time to do? If we trust them, then hopefully they’ll spend it on a whole laundry list of concerns (perhaps starting with thinking about ICT4D, for example), but if they’re less than trustworthy, your phone bill might just start to creep up again.

  • 8. Sean_Carey  |  February 22nd, 2007 at 12:02 am

    Whenever I read a piece like “Always-On Panopticon” it really makes me wonder about where we are headed with technology. Not so much wonder as fear. The technology we use changes at such an alarming rate that I ponder when the repercussions will come back to haunt us. We are still dealing with the consumption ideals of the industrial revolution. Every four or five years we throw out our old computers, cars, and clothes for new ones. We replace these articles in order to adopt the latest technology or fashion trend (cellphones have even higher turnover; I know people who replace their phone every year). With new features come new social practices, mainly in communicating information. All sorts of information-sharing tools have popped up, like Wikipedia and social networking sites. We voluntarily provide this information. As we provide this information, we create a collective consciousness. I can see Lankshear and Mason’s “Big Everybody” coming out of this collective consciousness. I fear that we’ve all become hooked on information, driven by both the addiction and the fear of being watched. But what problems will arise in the future from the choices we made today? I have noticed a decline in attention towards national and international problems. We seem to be more interested in reading within our electronic village than observing the broader issues that face our planet. What would be the end result of publishing our interests on the internet? Or having our credit card transactions recorded? I think this will lead to interesting research projects comparing adopters and non-adopters.

  • 9. n8agrin  |  February 22nd, 2007 at 1:08 am

    Rheingold’s seemingly vehement opposition to the inevitable social consequences of information technology had me on edge until he revealed his apparent soft spot for human nature. He states:

    “I have used the term “smart mobs” because I believe the time is right to combine conscious cooperation, the fun kind, with the unconscious reciprocal altruism that is rooted in our genes.”

    So apparently humans aren’t all fighting their way up the tower of the Panopticon to be the omnipresent guard over their peers. Lankshear and Mason’s paper, presented before Rheingold’s “smart mobs,” seems to confirm this. Even in social situations where a Panopticon model is possible, the negative repercussions of enacting it seem to outweigh the potential benefits.

    Obviously, Lankshear and Mason’s paper studies the effects of the Panopticon model in the context of a business environment and not in the larger contexts which Rheingold seems to be so concerned about. Certainly, there are differences between call center operation, Virgin Mobile collecting location information about their customers and governments tracking your movements and interactions through insecure internet transmissions, wireless information and video monitors.

    While the Orwellian society based on technological injustices could be a feasible reality, it’s hard to believe that we are walking blindly into it. Instead, even Rheingold’s concerns come off more as a mix between the individual’s dismissal of their rights and a degradation of basic human decency. Rheingold notes the changing social dynamics of being always on call, users who interrupt conversations to answer a cell phone, and the “hyper-coordination” of teenagers whose interpretation of time has become “soft”.

    Perhaps he has a point here, that social decency is changing. I always cringe when I see two people, clearly on a date, and one is on the phone in the middle of a busy restaurant. What isn’t recognized here is that social etiquette is fluid, and socially acceptable actions do not always evolve at the same rate as our technological advancements. Take automobiles, for example. Only recently in the general history of the automobile have laws come into existence which regulate social etiquette, such as requiring cars to stop to give pedestrians or bikers the right of way. Similarly, laws are now arising which may ban the use of cellphones while users cross the street in cities. Many cafes, restaurants, barbers, and other service industries prominently display no-cellphone-use signs in their shops in order to enforce a sort of social expectation. It takes time for people to understand the social nuances of a technology; unfortunately for us, those rules are not and cannot be written into the technology’s proverbial instruction manual.

    Lankshear, interestingly, also shows that individuals are often more than willing under specific circumstances to work in a Panopticon model. Arguably there are many situations where existing in a Panopticon-like social situation actually affords you more positive consequences than negative ones. For example, the government may be able to triangulate my exact location based on my cell phone signal. That might seem like an obvious invasion of privacy, until the one time I need to dial 911, in which case the emergency rescue teams will be able to pinpoint my exact location. Is that affordance something I’m willing to trade for the possibility that the government may be spying on me?
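    The location-pinpointing mentioned above rests on fairly simple geometry: given distances from the phone to three known towers, subtracting the circle equations pairwise leaves a small linear system. A toy 2-D sketch of that idea (the coordinates are arbitrary, and real E911 location involves noisy measurements and considerably more sophisticated estimation):

```python
def trilaterate(p1, p2, p3, d1, d2, d3):
    """Locate a point from its distances to three known towers (2-D).

    Subtracting the first circle equation from the other two cancels
    the x^2 + y^2 terms, leaving a 2x2 linear system solved here with
    Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("towers are collinear; position is ambiguous")
    return ((c1 * b2 - b1 * c2) / det, (a1 * c2 - c1 * a2) / det)
```

    The privacy trade-off in the comment is visible right in the math: the same three distance measurements serve the rescue team and the surveillant equally well.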

    The reality, I believe, lies more in the Lankshear paper than the Rheingold paper. There simply is too much information for any one person or group to sort through it all, to uncover every social injustice or track down every user’s information trail and extract all of the damning evidence which might be available in disparate sources. The Chinese government, try as it might, still cannot build a human firewall of trained vetters to handle the copious amounts of information currently available on the web. While the network has allowed us to share our information, each person is still an independent node capable of creating and distributing information how they see fit. It’s hard to imagine a computer that might be able to handle this sort of social dynamic and tease the wheat from the chaff. At the same time, future technologies will probably allow greater tracking of incidents when major events occur. For example, the OJ Simpson murder trial might have turned out differently if the prosecutors had video, sound, and image proof that Simpson had pulled into a gas station at a certain time, or perhaps his cell phone would betray him and give away his location coordinates with a time stamp to place him in a specific place and time.

    Of course, then you would still have to meet the burden of proof that the information used to incarcerate was not misleading or falsified. And here is where the social doomsday theories fall short of satisfactory. The crux of our society is not only rooted in the ideals of a democracy which allows its people to elect its representatives in government, but also in the independence of a judicial branch which ultimately protects the citizens of the society. Regardless of how much information might be gleaned in the near future from every individual, it is the laws and the transposing of those laws onto emerging technologies which, to me at least, make it seem difficult to imagine a society tearing itself apart through new technology. That’s not to say that technology will not ‘degrade’ (though I prefer the less caustic word ‘change’) social norms; it is, however, an implication that we build technology as a society and evolve with it. It does not fall from space like an extinction-inducing meteor, ending all of humanity in a brilliant flash.

  • 10. Ken-ichi  |  February 22nd, 2007 at 1:12 am

    Rheingold seems to express some fear about “technoanimism,” a situation in which people start treating machines more like humans and humans more like machines. I just don’t buy this. I had the same problem with the agents chapter in the Social Life of Information. Just because studies show that people start describing machines with personal pronouns and address them directly with cries of “You stupid machine!” doesn’t necessarily mean it’s the dawn of “technoanimism” where we treat all objects as if they had some independent agency, and we treat our fellow humans just a little further up the human-object continuum. I personally don’t see any anecdotal evidence of people treating computer systems any more humanely than other objects, and I don’t think we’ve encountered any empirical evidence in our readings. I describe my laptop with personal affection and occasionally get mad at my web browser when it misbehaves, but that’s only because I use these tools every day. If I drove my car as frequently, I’d feel the same way about my car, and, lo and behold, people have been anthropomorphizing cars for decades. Why no apoplexy and ululation over the perverse and inhuman man-on-machine emotion almost everyone shows for their cars?

    I really liked the notion of “technique” and it has been popping into my mind ever since I read about it. I feel like I am always thinking about parts of my life that could be more systematic, more rigorous, and that there are great benefits around the corner if I could just be more rational and consistent. LifeHacker seems like the ultimate expression of this mindset. But that rigor always comes at a price, which is not always obvious, or worth it (of course, sometimes it is. Witness the cult of the yellow notebook…).

    I appreciate Rheingold’s measured approach to this, questioning the effects of transhumanism while remaining optimistic that technology doesn’t have to repress human qualities incompatible with technique, and that it can actually amplify them. Technology is not intrinsically technique oriented. It can just as easily enable all kinds of unsystematic, random, and natural behavior as it can lead us to more machine-like existences. The paint brush is a technology, after all.

  • 11. Bernt Wahl  |  February 22nd, 2007 at 1:22 pm

    YouTube Politics

    In November 2006 the Republicans lost the U.S. Senate by one seat after a member's domestic dispute was shown on YouTube. Users are now producers of content that can be seen by millions. In 2006 there were 3 million viewers of iPod video casting; by 2010 it is estimated there will be 25 million in the U.S.

    The mobile revolution is having an effect economically as well as socially. In an effort to boost efficiency, managers use email to respond quickly to workflow information rather than call. Youth prefer to email employers and teachers but still prefer to talk to friends on the phone. SMS is used when calling is inconvenient or too expensive. Cell phones have incredible value in developing countries where information is scarce. Ring tones are a $4-5 billion market in the U.S. SMS, with its 160-character message limit, is becoming the preferred form of communication in many developing countries, bringing information to a worldwide audience. Global cell-satellite service, once seen as expensive, can transmit information anywhere in the world for just a few cents in SMS form. In 2006 there were 60 million blogs worldwide, more and more of them accessible through mobile devices. As mobile technology brings digital information to remote places, there is one major problem we will need to overcome: how do we teach youth to send text messages that are grammatically correct?

  • 12. Bernt Wahl  |  February 22nd, 2007 at 1:23 pm

    Smart Mobs: A Mobile Device Society Based on Mobile Media

    Mobile communication is fast becoming the force that binds global communities. In 1991 there were 60 million mobile devices; in 2006 there were 2 billion wireless devices deployed worldwide. Penetration rates are roughly 90% in Japan and South Korea, 35% in China, 5% in India, 42% in Pacific Rim countries, 72% in Latin America, 10% in Africa, 60% in Europe, and 67% in North America.
    In the future we will have more mobile devices than people, as inanimate objects keep in contact with their owners through wireless communication. Business and personal communications are the prime driving forces, with unintended effects such as collective activism.

    Collective activism is using the network as a primary means to call people into action. The Philippine president calls supporters to prevent a coup d'état; Chinese groups rally to protest against Japan becoming a permanent member of the UN Security Council; Ukrainians assemble masses to prevent vote fraud. At the 1999 World Trade Organization meeting in Seattle, protestors used mobile communications to disrupt meetings. Ironically, it is technology that is being used to mobilize resistance to globalization. Corporations now feel the brunt of their own technology. The new technology also has the potential to undermine the control of governments through 'mobile clustering'. News is now hard to censor with so many alternative ways of delivery.

    Banned protests become collective actions of family and friends. Social identities are formed through small, intimate wireless communities based on collective identities. Instant communities form for rave parties, sports events, religious gatherings, and dating. In YouTube politics, people form opinions based on what they see posted by virtual groups online rather than on the words of politicians; e.g., SMS messages cost Spain's president the election when they swayed voters by accurately informing them that the railway bomb blast came from Al Qaeda rather than Basque separatists, as the state had claimed. Later the police traced mobile phone records to identify Al Qaeda cell operatives who had recently talked to bombing suspects.

    In Africa, farmers in remote villages rely on cell phones to communicate with brokers about crop prices. The phones are brought by bicycle messenger periodically and recharged by pedal-powered generators.
    Inhabitants of developing countries place extraordinary value on mobile phones; they are their link to the outside world. In China, a miner can spend up to 40% of his income on cell calls to find employment. Communication has the power to bring groups together.

    Source: Manuel Castells, iSchool U.C. Berkeley Lecture November 18, 2006

  • 13. evynn  |  February 22nd, 2007 at 3:29 pm

    Rheingold raises extremely important issues about the pervasive role technology plays, or is coming to play, in some parts of society, and whether its effects are positive or negative. One line near the beginning, though, jumped out as being somewhat off: “…computing and communication technologies would seduce consumers into voluntarily trading privacy for convenience.” It's that one modifier, “voluntarily,” that caught me up: is it really voluntary? Does the average city-dweller voluntarily allow their face to be captured three hundred times per day on closed-circuit TV? Do we volunteer to let software companies shut down our software if, in their snooping, they notice some activity they disapprove of? I would replace “voluntarily” with “unwittingly.” I doubt the average consumer of technology realizes the extent to which those conveniences make them vulnerable to the oversight of government bodies, retailers, and advertisers. I don't think that Rheingold meant to imply such passivity, but the issues of voluntariness and convenience versus privacy have uncomfortable implications for those who build the technology.

    All this leads to an ethical tension for those designing and building technology, between hiding all the complexity that goes into building it and making sure that people have a good idea of what others are capable of doing with it that may affect them, for good or ill. We need to consider where the principles of good design lead us. We aspire to create technology that is elegant and simple in the user's eyes, to encapsulate the complexity in convenience. When people become used to these qualities in technology, though, it is easy for them to forget that someone built it, and they are no more aware of that person or corporation's motives and goals than they are of the code running underneath the interface. They don't connect their system's slowdown with a web page they visited last week that installed spyware. They do not read software licenses to find out that Microsoft will be keeping an eye on how they use their operating system.

    To bring in something from a reading from another class (I205, Braucher), market forces are not going to fix the problem of corporations hiding technology that may be unwelcome or even harmful. How can they when the “market” is unaware? The fact that technology can be used to track people and eavesdrop may be known on some level, but far fewer people know the extent to which it is sanctioned, how it is achieved or what, if anything, they can do about it.

    Rheingold discusses the negotiation involved in trying to limit access to ourselves while increasing our access to others. Something similar needs to happen between technologists and average technology users, so that the balance of power between watcher and watched is evened out. Technologists must be willing and active negotiators with people who will encounter the artifacts they build, if only to make them aware of when they are encountering them, in some cases bypassing the institutions that would prefer to make technology not just convenient, but invisible. Modern digital technology may have the potential to create a panopticon effect, but it can also allow everyone to see.

  • 14. daniela  |  February 24th, 2007 at 1:36 pm

    So not everyone embraces information technology; there are some skeptics. Maybe the increased (perceived or real) threat to privacy in online voting will trump the benefits of increased ease of access. Maybe we are at the mercy of our biology and react to social cues given off by machines the same as we do to those given by humans, as Nass and Reeves' research supports. But how accurate are these concerns? The unexpected outcomes of Ireland's Information Age Town competition suggest we are quick to make assumptions about how we adopt and adapt to technology. I don't think that assuming a worst-case scenario in our future computing is any more (or less) helpful than expecting technology to help solve large societal problems by shrinking knowledge and power divides.

    I am most intrigued by social interface theory and the idea that we may not have control over our emotional interaction with adopted technologies. If we are destined to treat mechanical artifacts as if they were people, I wonder why we should fight this intuition. There will be negative consequences to conflating humans with machines, yet, ultimately, worthwhile dialogues with computers must also form. I'd argue they already do. Without valuable emotional and intellectual stimulation, there is less incentive to interact with technology. It is not just Ellul's characterization of “technique” and its blind push toward efficiency and power that interests me as a user of technology. Why does there seem to be a moral obligation to worry that technology will enable people sharing a private space through computer-mediated communication (e.g., via text messages) to threaten the people sharing a public space face-to-face? Maybe we should also ask if people sharing a public space through CMC are threatened by private face-to-face communication. I could imagine that social pressure to seriously date someone by the age of 30 might cause someone to remove themselves from an online community in favor of a less rewarding or stimulating first date.

    Rheingold worries that we may become “less friendly, less trusting and less prepared to cooperate with one another” by treating mechanical artifacts as we do other people. I'm more inclined to bring back Warschauer's point that we cannot over-simplify this relationship. The same technologies that give us collective security may also threaten our privacy through collective surveillance. The new airport security machine that scans our bodies as well as our bags may make us feel secure, but perhaps it bothers us more than it reassures us; we feel invaded by the person at the other end of the camera seeing us naked. There are myriad ways, and reasons, technology may be adopted. Whether in the form of fears or hopes, our reactions to technology seem always to be emotional in nature. Or is that just my emotive-interface obsession talking?

  • 15. jilblu  |  February 24th, 2007 at 6:20 pm

    I loved reading Lankshear and Mason's piece on call centers in contrast to Rheingold's description of Foucault's panopticon, because it made me feel so hopeful. In Foucault's vision of the panopticon, “discipline” or power/knowledge is completely one-sided, all of it in the hands of those with the power to watch. In the call center study, although the call monitoring and surveillance technology certainly made it possible for management to create a panopticon, it wasn't in anyone's interest to do so. Somehow, management understood that using the technology to control their employees would only result in low morale and poor customer service.

    Instead, both the agents and the management seemed to use the monitoring system in a benevolent fashion that benefited everyone. Agents used it to track their own call rate, and saw it as providing corroborating evidence of their own reasonable actions in handling difficult customer calls. Managers encouraged agents instead of punishing them for low call rates, and they used recorded calls to help agents improve their skills. Furthermore, both call centers agreed that recording agents was too invasive, and they developed social norms around this: managers rarely recorded agents, and they usually let them know beforehand.

    What is hopeful to me about this study is that the possibility of Foucault’s discipline certainly existed here, but the management (consciously or unconsciously) seems to have decided that it was not worth foregoing a positive workplace environment in order to track and maximize agent efficiency. In exchange, agents took pride in providing good service, preferred being busy to doing nothing, and overstayed their shifts when the call stack was too long. Both sides valued positive work relationships, and adapted their use of the call monitoring technology so that it contributed to the general good will.

    In one of my previous careers, I designed online training for Convergys call center agents. Like Informationco, Convergys provided outsourced call center services to other companies. Their turnover rate was extremely high – over 100% every 6 months! – so they needed an efficient and effective way to train new agents. I’m not sure what the work environment was like at the call center, but one agent did tell me that they were encouraged to personalize their team work spaces by bringing things to hang above their desks – she seemed to enjoy doing this, and I got the sense it added to her feelings of camaraderie in the workplace. In addition, as part of their efforts to attract more agents, Convergys also started a “home agent” program aimed at stay-at-home mothers and retired workers. Although these were low-paid, short-term workers, the company did make some effort to add to their comfort.

