is203 - Social and Organizational Issues of Information » Week 3

Week 3

Jan. 30th: The Social Life of Information

Chapters 1 and 2 in Brown, John Seely and Paul Duguid. 2000. The Social Life of Information. Boston, MA: Harvard Business School Press.

Feb. 1st: Information in Organizations and the Workplace

Chapters 3 and 4 in Brown, John Seely and Paul Duguid. 2000. The Social Life of Information. Boston, MA: Harvard Business School Press.

January 2nd, 2007

30 Comments

  • 1. yliu  |  January 27th, 2007 at 11:09 pm

    “The Social Life of Information” actually reads like “The Skeptic’s Guide to the Information Age,” but it does have a number of interesting insights. Six years after its publication, some of these insights ring quite true and will probably remain so, while the rise of new information technologies and trends is already answering some of the critiques thus posed.

    The crux of the authors’ argument rests on debunking the miraculous, world-transforming properties of information: powerful social forces are at work to preserve the existing order, and these forces cannot be easily discounted by trite pronouncements like the 6-Ds. As “Enormous Oil” and the Voltron-like AT&T are still with us, stacks of paper still fill offices, and the concept of universities where students physically gather still seems as impregnable an institution as ever, it is quite obvious that information is not some overwhelming whirlwind of change.

    But I’d also posit that while information in and of itself may not be such a panacea for social ills, the availability of information can serve to change social expectations, changing some of our fundamental assumptions about how things “should work.” The authors’ pronouncement in 2000 that “the small, agile firm with big ideas and little money is less likely to be the viable start-up of legend” and that “the garage door [has slammed] shut” proved exceedingly premature with the now legendary garage startup of Google. Further, the authors’ empirical examples of massification - the AOL purchase of Netscape and the Microsoft venture into television set-top boxes - have crashed against the de-massifying forces of Firefox (open-source development) and DVRs (Linux in TiVo, MythTV, etc). The rise of blogs, peer-to-peer networks, distributed systems, etc. has fundamentally reshaped some of our assumptions about technology and information sharing. Without information technologies, and without information itself made available via search engines, blogs, mailing lists, social networks, etc., these projects would have remained niche efforts or logistical impossibilities. The effect of information on society is far less than the miracles of “infologic” propaganda, but a lot more than the “myths” the authors would claim.

    Their critique of software agents is one of the somewhat mediocre areas of discussion, where some insights were hidden amidst much random musing. In many ways the authors seem more interested in putting up anonymous “futurists” as strawmen and over-emphasizing the technological weaknesses of software agents (which are in fact easily rectified, given time and processing power). Few people are really claiming agents as bona fide -replacements- for humans starting right here, right now, because obviously AI is not close to replicating the cognitive abilities of humans. From web spiders cataloging the web to automated research assistants that watch you work and go out to the network to search for information relevant to your current writing topic, these agents are intended to perform the mundane and tedious tasks that do not necessarily require the valuable cognitive abilities and time of a sentient human being. Further, it is important to note that intelligent agents != rule-based automatons. The rigidly “if-then-else” agent is a terribly weak and inaccurate representation of the capabilities of software agents (particularly those with learning algorithms of various kinds), and Mac OS 8/9’s Sherlock search predates the more accurate and socially in-tune algorithms like PageRank. PageRank in particular breaks away from the authors’ view of a lone agent making independent judgments of the things it sees, for it draws upon human social actions (linking to things) in a way that allows the agent to extract some semantic value and sense of relevance. Various recommendation engines can indeed find the “most interesting” things, customized for your own particular tastes. In the more socially aware web of the modern day (dare I say it…a Web 2.0?), it is conceivable that agents may actually draw upon real human experience to make their judgments.
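
    To make that contrast concrete, here is a minimal sketch of the power-iteration idea behind PageRank (Python, with a made-up toy link graph standing in for real crawl data) - rank flows along human-made links rather than from any one agent’s lone judgment:

        # Toy PageRank sketch, illustrative only: scores are driven by the
        # human act of linking, not by a lone agent's judgment.
        links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # made-up graph
        damping = 0.85
        rank = {page: 1.0 / len(links) for page in links}
        for _ in range(50):  # power iteration, enough to roughly converge
            new_rank = {page: (1 - damping) / len(links) for page in links}
            for page, outlinks in links.items():
                share = rank[page] / len(outlinks)  # rank split over links
                for target in outlinks:
                    new_rank[target] += damping * share
            rank = new_rank
        print(rank)  # "c", linked to by both other pages, ranks highest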

    But the issue of trust is indeed fundamental to software agent design and use, and will probably continue to be an issue. Even as a technological optimist, I wouldn’t trust an agent to actually negotiate a product’s price and spend my money on my behalf. However, some people do, of course - the various automatic bidding assistants and auction snipers for eBay are just the first incarnation of these types of agents (they take a few parameters from the user and go out to work on their own, albeit on a very simple task). The authors’ insight on the perception of reliability, instrumental and moral, seems right on target.
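
    The rule such a bidding assistant follows really is that simple. A hypothetical sketch (the function name and parameters are mine, not eBay’s):

        # Hypothetical proxy-bidding agent: the user supplies only a price
        # ceiling; the agent answers each rival bid on its own.
        def proxy_bid(current_price, my_ceiling, increment=1.0):
            """Return the agent's next bid, or None if it drops out."""
            next_bid = current_price + increment
            return next_bid if next_bid <= my_ceiling else None

        for rival_bid in [12.0, 15.5, 24.0, 25.5]:
            mine = proxy_bid(rival_bid, my_ceiling=25.0)
            if mine is None:
                print(f"rival at {rival_bid}: agent drops out")
                break
            print(f"rival at {rival_bid}: agent bids {mine}")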

  • 2. kesava  |  January 28th, 2007 at 1:31 am

    When Brown and Duguid started citing recent mergers of some mega-companies as examples in the context of Downes and Mui’s argument that IT drives down transaction costs, I was expecting them to explain the mergers in terms of those same costs. But instead they explain them as mega-companies trying to protect their ecological niche. I see at least two transaction-cost-oriented explanations for the rise of the large firm: 1. A lot of operational knowledge in services firms still remains tacit and is hard to capture as digital information, so transaction costs remain low only when the firm is large enough to incorporate the other stages of the whole services supply chain. 2. The rise in importance of humongous amounts of data (e.g. Google): it’s advantageous to be one big company and have all that data rather than a little ecosystem of companies trying to use the same data.

    But on the other hand, when information has been commoditized enough, the trend seems to be an ecosystem of companies rather than one large one (Brown and Duguid do not mention this). Companies like SAP, for example, help spread knowledge about their products by offering certification. This makes possible an ecosystem of consulting companies, rather than SAP having to be one gigantic firm that does everything by itself.

    Yiming has neatly summarized a lot of the discussion about agents. I would like to expand on one of the lines he has written about agents and learning algorithms. Brown and Duguid talk about a lot of social concepts like negotiation, trust, representation, etc. at which bots would fail miserably. But we humans too fail often when we are kids. We trust almost anybody, we are not very good at negotiating (though we are very, very good at reading expressions and body language), etc. But much of our later experience gets shaped by learning, even statistical learning (sometimes resulting in prejudices like “Bearded and brown-skinned? Reason to be suspicious!”). We have started to realize the power of some of these statistical learning techniques, and that may be of some hope for more humane agents.
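
    For instance, here is a toy sketch (entirely illustrative, not from the book) of an agent learning whom to trust statistically from its own dealings, using a simple Beta-posterior estimate:

        # Toy "learned trust" sketch: the agent starts neutral and updates
        # its trust in a counterparty from observed outcomes, instead of
        # following a fixed rule.
        def trust_estimate(successes, failures):
            """Posterior mean of a uniform Beta(1, 1) prior over
            trustworthiness, after the observed dealings."""
            return (successes + 1) / (successes + failures + 2)

        outcomes = [True, True, False, True]  # hypothetical past dealings
        s = sum(outcomes)
        print(f"trust after {len(outcomes)} dealings: "
              f"{trust_estimate(s, len(outcomes) - s):.2f}")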

  • 3. mattchew  |  January 29th, 2007 at 10:11 pm

    I found the discussion of the relationship between work practice and work process congruent with my personal experience working in large institutions. Shiny new technology and process overhauls were the focus of reorganizations, with little to no thought given to how people actually perform the work. Often the changes introduced by such tinkering reduced employee productivity. Part of this was due to the inevitable disruption that accompanies process change as people learn the new ways. A larger factor was that, as the new processes and systems were designed to meet the goals of upper management, there was minimal consideration or understanding of the workflow requirements of the employees actually doing the work. The end result was confusing, user-hostile interfaces, the requirement to rekey information, and the need to perform extraneous actions to get the new system or process to accept or process input. Combine this with “empowerment” that delegated responsibility (blame) but not authority to workers, and it is no wonder that most employees viewed reorganizations like bad weather - something over which they have no influence that must simply be endured.

    Any business process is by nature an incomplete, bounded model of reality that can’t address all potential situations, no matter how complex the process flow diagram. Depending on the situation, a significant percentage, if not a majority, of work effort is spent on dealing with exceptions to processes. This is where the implicit aspects of work described by Brown & Duguid as collaboration, improvisation, and narration come into play. Such innate human processes that cannot be easily included in a flow-chart are the lubricant that allows messy reality to be reconciled with the parameters of business processes.

  • 4. celeste  |  January 30th, 2007 at 12:27 pm

    I think the interesting issue presented in this book is not whether bots are “never going to be good enough” or that “corporations are not going away”. The interesting issue is looking at how people perceive and predict the future: from a biased point of view. Each person projects the future based on what they know, and often on what they most want or are most frightened of. Brown & Duguid, whatever they are saying about the current state of technology, seem to be calling on people (perhaps strawmen futurists) to take a broader view when looking forward. That means a broader view when making predictions, or when trying to decide how to develop a product or technology in the immediate future. I think the fallacy they are pointing out is the same fallacy that trapped the telephone men, and we can see parallels between the phone and the internet taking on more social roles, despite the roles prescribed by the original promoters.

    Since others have touched on bots, I will too. I think that bots have a great deal of potential, but the authors have laid out some good questions that remain even after you deal with the technical details: for example, trust, control and responsibility. If your bot represents you, who gets in trouble when it does something you didn’t intend for it to do? More AI doesn’t necessarily solve these problems, which seem to parallel the issue of who is responsible for the crimes your almost-adult child commits (for example, when should 16-year-olds be tried as adults, and when as children?).

  • 5. matt earp  |  January 30th, 2007 at 12:34 pm

    Brown and Duguid’s discussion of bots led me to think about an ongoing conversation I’ve been having in mcd about the relative merits and effectiveness of internet radio, specifically comparing Pandora and Last.fm’s selection processes (and whether or not they work well). These processes are, at some level, algorithmically controlled, and both have at least a veneer of bot-dom about them, Pandora’s being especially conversational when you’re starting out (much like Pandora’s CEO). I welcomed their pessimism about bot foiblelessness, the very thing that makes bots good at digesting massive quantities of information but also massively infuriating (did anyone have contact with Sprint in the era of “Claire”, Sprint’s auto-teller?). When applying bot-type choices to music selection and radio programming, as Pandora and Last.fm do, I feel, as do others I think, only a mild happiness when they get it right, and affront when they get it wrong. How to build better bots in that realm is a real problem, and while I’m a proponent of human agency in the process, I feel that if the world DOES move forward constructing automated decision-making for entertainment, Brown and Duguid’s warnings about the lack of human foibles, the very thing that makes music selection unique and personal, should be heeded. While I consider myself a technological optimist in some cases, I don’t trust designers and engineers NOT to take the easy route in designing these systems, and De Long and Froomkin’s observation that the desire for frictionlessness “might disable the ‘Invisible Hand’” should be heeded as well. This is not because I necessarily believe in Smith’s conception of that hand (quite the opposite, in fact), but because it is such a well-embraced theory, underlying so much economic determinism these days, that the concern of noted economists like Froomkin about the use of bots in the process shows that unchecked delegation of decisions could become a massive problem if it proceeds unquestioned.

    In a side note, I veered somewhat into the realm of meta-information analysis when reading these articles, and tried to think directly about where Brown and Duguid were drawing their ideas from. I noticed that no fewer than five authors cited are or were at least adjunct professors at UC Berkeley’s ISchool. They are Downes, Lyman, Wilensky, Lave, and Nunberg. I point this out as neither a good nor a bad thing in itself, merely something to be aware of when reading any text: the social/academic situation it was created in (I also think Brown and Duguid would expressly encourage this sort of questioning). As I grapple with questions of just what the ISchool is preparing me for, I hold those questions up against observations I’ve been making in another class I’m currently in, Nancy’s Social Life of Visual Media. In it we’re exploring the multi-faceted definition of new media, comparing Howard Rheingold’s teleological view of history as it applies to the creation of technology vs. Foucault’s genealogical view. The temptation is to view all aspects of New Media as the logical culmination of a single strand of a particular history that came before. I have been made wary of this approach. In the same way, I worry somewhat that Brown and Duguid draw much of their approach from other professors who have been involved with this department. What conclusions can we draw from that? The happy one would be that many of the best people spend time at our department at some point in their career, but a more critical one might be that Brown and Duguid’s thoughts are happening in an echo chamber. Thanks to the authors’ very arguments concerning tunnel vision, I’m anxious to think critically about source citation in recent works as we move forward in examining the nexus of sociology and information.

  • 6. Sean_Carey  |  January 30th, 2007 at 2:00 pm

    In reading the first chapter, I found the statement “Chronic information shortages threatened work, education, research, innovation and economic decision making — whether at the level of government policy, business strategy or household shopping” (12) to be quite interesting. I think part of the issue lies in information becoming obsolete once it’s acquired. It is just like computers: once you buy it, the computer is out of date. This can be for two reasons. The first is that the information reflects a momentary look at a continuously moving body. For example, the Census is obsolete instantly because people and populations change. The other reason is information saturation. Once you learn everything there is to know about, say, dinosaurs, you get bored and move onto another topic. This second reason relates more to topics with a static amount of information. I think these two reasons help fuel the desire for information.

  • 7. kesava  |  January 30th, 2007 at 2:25 pm

    Reading Matt talk about foibles, Last.fm and Pandora, I was reminded of Robin Williams’ Bicentennial Man. It’s an everyday thing for us to accept human fallibility, but when a bot errs… I think… “Why do you think I spent money on you? Aren’t you supposed to be good at this?” Bots getting away with foibles is like Robin Williams fighting for acceptance in that movie.

  • 8. srini  |  January 30th, 2007 at 3:17 pm

    The title of the chapter ‘Limits to Information’ poses a question: “Are there any limits to information?” The authors, while trying to address the question, term the current information revolution a tsunami. But is there a sudden surge in information? I rather feel that the surge is in the accessibility of information and not in its availability. Information exists everywhere, like atoms, but this latest representation of information as bits gives us more control over it. The enormous growth in technology has changed our perspective on information. But is this abundance simply meaningless? We could find treasures inside this abundance that would answer millions of questions. Today’s major information retrieval systems like Google, Ask, MSN, etc. and other natural language query processing systems like START make an honest attempt towards it.

    The Information Age has brought a plethora of changes to our society, making things better and easier for us. Though some technologies, like the telegraph and the typewriter, have been replaced with improved technologies, did the former completely die? As human beings we have values attached to certain things. For example, paper and pencil are very old technologies and we have lots of alternatives to them in the current world. But still, can we imagine a paperless world now? Why do we use paper in legal procedures even in this digital age? It is very difficult to put an end to such things, because they are integrated with our society and culture. Does the information age bring ‘endism’ to all the domains mentioned by the authors? Rather, I would argue that it fosters the growth of certain domains like universities, government, mass media, etc. It empowers universities to give their researchers global reach and to help students with services such as e-libraries. These technologies aid governments in understanding the needs of the people and taking necessary action relatively quickly.

    The authors warn us about information technology’s deeper penetration into our lives and ask us to look beyond information. I would agree with that, as we have certain social and moral values attached to our lives which could come under threat at some point. Privacy is one of the main recent concerns raised by the explosion of IT. For example, customers’ online shopping is tracked, which reveals a lot of personal information about them and may affect their privacy. Hence, in my opinion, it should always be kept under a certain check, and we should make sure that we drive the technology and are not driven by it.

  • 9. Bernt Wahl  |  January 31st, 2007 at 10:20 pm

    Predicting the Future

    In our information-centric age, technological developments possess the potential to create a great leap forward or a cataclysmic demise. The only inevitability is change itself. What happens is in the hands of those who control their capabilities.

    In The Social Life of Information the authors — John Seely Brown and Paul Duguid — point out that technology is the tool by which forces exert change. The direction of change is difficult to predict; it often follows an unforeseen path. Much like the path of a ‘strange attractor’ in chaos theory, the slightest perturbation can alter an intended outcome in a relatively short period of time.

    Ideas that once seemed obvious – mass transit or telecommuting, a paperless office, and the rise of teleconferencing – have still not materialized. Many of these ideas may eventually come to fruition, and some will be eclipsed by yet unforeseen innovations.

    Ironically, it was Douglas Engelbart whose ‘Engelbart Law’ stated that a successful technology’s adoption is generally overestimated in the beginning stages and later underestimated as it is adapted. Dr. Engelbart and his team would create the vision of networked collaboration, Xerox PARC would make it work, Steve Jobs through Apple Computer would champion it to the public, and Microsoft would reap the financial rewards. As the old adage states, “Prediction is hard, especially about the future.”

    A vision that took a half-century to be realized:

    If the computer is the machine that changed the world, then Douglas C. Engelbart’s Augmentation Research Center at SRI (Stanford Research Institute) in Palo Alto, California transformed it into something the world could use. Using a combination of charisma, vision, organizational skills and sheer determination, at a time when punch cards, vacuum tubes and teletype machines were synonymous with computing, he led his research group to pioneer computing devices that would help people collaborate. These mechanisms of computing would later be known as the mouse, multiple windows, email, hypertext and teleconferencing. Today it is hard to imagine the digital world without his creative influence, but when he first proposed them he was dissuaded from pursuing this research, both in his Ph.D. thesis and in later academic work. It was felt by colleagues — in the 1950s and 1960s — that these ‘wild ideas’ were unlikely to produce worthwhile applications, especially ones worthy of tenure at a major university like U.C. Berkeley, where he was teaching.

  • 10. Bernt Wahl  |  January 31st, 2007 at 10:24 pm

    Sorry some unwanted material was added. No way to edit it out.

    Bernt

  • 11. jerryye  |  January 31st, 2007 at 10:29 pm

    Once again we have a reading where the authors are ridiculing this or critiquing that, complaining that the pace of technological advancement is too fast or that the end results are tools that are mechanical and cold. Brown and Duguid complain about being dragged along for a ride by a myopic driver with tunnel vision. The technorati should stop and smell the roses, they say.

    The book goes into numerous complaints about how technology can be impersonal and ineffective, and can never replace the human factor. What Brown and Duguid conveniently neglect to mention is how much more can be accomplished through the use of technologies such as bots or agents. Service today is no longer limited by how many qualified people a company can hire. Although the processes and interfaces are not perfected, countless hours of research and development are being poured into improving these tools. Contrary to what the book suggests, the agents that the authors mention are still very much a work in progress and are unimpressive to any computer scientist in the field. It isn’t creativity or empathy that is missing; the technologies needed for some of Brown and Duguid’s suggestions simply do not exist.

    It is interesting to note that a huge problem for tasks such as speech recognition or natural language processing is the sheer amount of computation required. The field is seeing a renaissance today mainly due to huge increases in computational power. It is our goal-oriented engineers and scientists who have improved compute power to the point where new algorithms can be taken advantage of. It is understandable that bystanders might get the impression that these engineers have tunnel vision and yield useless gadgets. However, if people would just look one or two degrees of separation out, they might find each gadget is a small piece in the grand scheme of things. Like Rome, complex technologies are not built overnight.

  • 12. cvolz  |  February 1st, 2007 at 1:51 am

    What I found particularly interesting was the discussion of the Xerox copier repair representatives and the way they went about solving problems that were not officially accounted for in their training manuals. What struck me about this was that just about every job requires a degree of creativity, and the flaw of process-oriented work is that the process can never account for every exigency. This is a reality that I think is becoming increasingly important as fewer and fewer jobs involve the rote performance of atomic tasks (that’s atomic as in individual and isolated, not atomic as in nuclear, though that might be cool) and more and more involve active problem solving. There just aren’t many jobs for people who are content with rote task repetition. Quite possibly there never were. Progress and innovation necessarily involve the creative input and observation of people constantly bouncing ideas off each other, tinkering and experimenting. So, fortunately, while there may be many boring people out there, there are few true dullards.

    I also found the mention of the subversion of processes interesting. And it’s true: once a process is implemented, workers immediately set about trying to get around it. It reminded me of an argument I had with a former manager where I said exactly that. He was, with good intentions, trying to smooth out some of the confusion and chaos that was occurring, and part of that was to come up with a defined process for things. What I was trying to express then - and maybe I can do a better job now - was that the process being proposed centered around accomplishing tasks but had very little to do with performing work. Most importantly, it completely glossed over the actual interactions between the people involved and tried, instead, to dictate an outcome without understanding the particulars of how that outcome would actually happen.

  • 13. eunkyoung  |  February 1st, 2007 at 2:13 am

    Regarding last lecture’s discussion about SCOT and software applications, I was thinking about another example - stem cell research, government regulation, and religious groups’ resistance. The research is not finished (well, in some countries the government even supports this kind of research…), and we never know what the benefit or harm will be. However, various social groups worry about the result and/or the means and methods of getting there, and they ban the use of certain research materials. I think this case clearly shows how social groups lead, or exercise their influence over, technology.

  • 14. elisa  |  February 1st, 2007 at 10:56 am

    Tunnel vision is bad, very bad indeed. Forgetting that we are part of a system which is more than the sum of its single components causes inconveniences at best, humanitarian disasters at worst (the development economics literature, to take an example outside technology, is case after case after case of nations wrecked because of some brilliant solution to their problems that worked perfectly in the modeling phase and then failed miserably in reality, because of the million variables that had been left out of the model and then turned out to be crucial).

    The real problem, though, is that the holistic vision that cures tunnel blindness doesn’t seem to be much help when one is trying to do something concrete, rather than just looking philosophically at a situation. In fact, holistic vision can very quickly lead to total paralysis. Brown and Duguid hint at this dilemma with the Xerox technical reps story and conclude that “organization[s] have a heavy investment in routine behavior… (and) also need to improvise” – in other words, it’s a delicate balancing act, which is a fair description but not a useful prescription. Are there rules? Do you trust your gut? Do you try your best and then hope it works out? Do you prioritize some of the variables and then try to include them in your project (artifact, policy)?

    This, ultimately, is the bots’ problem: they cannot flexibly include the variables of the system they act in into their ‘decision-making’. We have elaborated a set of behaviors, non-verbal cues and social conventions that allow us to consider the system as a whole, and which, as B&D say, “reflect social and not simply individual activities.” Bots, on the other hand, ‘live’ in a context but are not aware of it. I think that what B&D are really asking, when talking about bots, is ‘can technology and its development really overcome this?’ Will we get to a point where we can include an ‘IF none of the other behaviors I have coded into you work THEN make up your own mind and decide what’s the best course of action’?
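
    Spelled out as code (purely a thought experiment - no real agent framework is assumed), that rule looks disappointingly like just another rule:

        # Toy sketch of a rigid rule-based agent with a catch-all branch.
        # The catch-all is exactly where flexible, context-aware judgment
        # would have to live - and today it is just another coded rule.
        def rule_based_agent(situation):
            coded_behaviors = {
                "price_above_budget": "decline the offer",
                "price_within_budget": "accept the offer",
            }
            # "IF none of the coded behaviors work THEN..."
            return coded_behaviors.get(situation, "escalate to the human owner")

        print(rule_based_agent("price_within_budget"))  # accept the offer
        print(rule_based_agent("seller_seems_shady"))   # falls to the catch-all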

    As a fan of science fiction, I have no doubt we will (Cylons, anyone?), but as a consumer of technology, rather than a creator, my complaint is that we are sold a lot of technology as if we were already there. To answer the excellent points that Jerry makes about the critiques of engineers’ tunnel vision by observers who seem to ignore the technical constraints engineers have to work with: the problem is not really with the engineers, it’s with selling a product as ‘plug and play’ when what it usually means is ‘plug and then go play volleyball because it won’t work.’ And to this objection I myself would reply: engineers have nothing to do with it. It’s the marketing people, who have no idea of (or choose to ignore) what the technology does and what its limitations are, but will always sell it in the most unrealistic way. See? Yet another case of tunnel vision…

  • 15. n8agrin  |  February 1st, 2007 at 11:27 am

    Brown and Duguid’s notion of lateral versus longitudinal social connections and their importance in business hits home, and it seems (to me, at least) that this realization has been embodied by more recent developments in community-based software. Wikipedia is run completely by a community, which leverages a group’s lateral connections to compile up-to-the-minute encyclopedic information. This is debatable, of course, because Wikipedia does have some type of longitudinal structure in its use of editors. I would argue that their necessity stems more from the inevitable malicious user than from an attempt to force knowledge sharing on Wikipedia into a specific procedure.

    Wikipedia also highlights the notion of ‘process versus practice’. That is to say, the Wikipedia software does not define how an encyclopedic entry should be structured. Rather, as many I’m sure would argue, its power comes in providing a workspace allowing users to define the layout and information as necessary. This reflects an emphasis on promoting practice, not process, even though there are guidelines for writing an article. Subverting the process is the process here, where the tool provides the means for open-ended creation as opposed to rigid boundaries and limits.

    It would seem that much of the web 2.0 and open source movements have been focused not so much on new technologies as on how these technologies approach the end user. No longer do we talk about the task that needs to be fulfilled so much as we talk about the needs of a user, and how those needs might be addressed.

    I recently attended a talk by Schultze and Webb, two interaction designers who highlighted three concepts they felt paramount in product design: threshold, sport and friendship. Not usefulness, cost and user base, or something more task- or process-related. Their notion of threshold is the moment between receiving a package in the mail and opening it, sport is the collaboration of a team working in concert, and friendship… well, does it need explanation? Their point was to highlight the need for social interaction, as the products they consult on, mostly high-tech lifestyle products, create further opportunity to disconnect people physically. I would argue that another emphasis of theirs, customization, can be included in the practice ideology. Essentially they encourage users to meet their needs as they see fit, instead of, say, forcing them to use a cell phone the way a programmer has been instructed to design it.

    One issue that concerns me is one Brown and Duguid do bring up in Wired’s exploration of the design firm that attempted to create a hot-desk environment. They highlight a seemingly necessary tension between having some framework and yet enough flexibility to allow users to create their own practice. At what point does this framework become too much of a process? I feel like I have simply oversimplified the notion of process versus practice, which is why I’m confusing this dichotomy with a tension.

  • 16. zgillen  |  February 1st, 2007 at 11:40 am

    In reading the chapter “Home Alone”, there are several references to the social aspects of work and the workplace. This idea of workplace and the case study involving Chiat/Day reminded me of an SSME lecture last fall by Luke Hughes of Accenture Labs. Briefly, Luke demonstrated virtual offices employed in several locations across the country. At the Palo Alto office, they had cameras, microphones and speakers discreetly mounted in a hallway, with large LCD monitors occupying an entire corner. The image projected was the hallway of the office in Chicago. Each respective monitor displayed the image of the other’s hallway. So, if a person in Chicago and another in Palo Alto happened to walk past these monitors at the same time, they could talk and interact in real time, just like any hallway conversation. The idea takes advantage of the social potential that occurs in these designated spaces. Anyone with experience working in an office environment understands the importance of hallway conversations. By creating the element of informality in a hallway encounter, Accenture is avoiding the different set of social formalities involved in making a telephone call or sending email. It’s obviously much easier to meet and interact with colleagues across the country by seeing them in the hallway, saying hello, and asking them about the Chicago Cubs! Would you ever call the VP of a company and have the same conversation? The interaction can continue in virtual offices should the conversation take a professional tone and relate to a project that needs ‘closed door’ discussion.

    The example of Chiat/Day demonstrates the social implications of creating a geographically egalitarian environment. Given the above example of creating a virtual office setting, will this create the necessary social environment independent of location? I would argue that creating this virtual office at home might solve the social issues presented by Brown and Duguid, independent of the technological failures. Imagine sitting in your office at home and still being able to engage in impromptu conversations with people passing by in Chicago, Palo Alto or New York. This technology has the potential to replace the physicality of the office while still maintaining the social construct.

  • 17. evynn  |  February 2nd, 2007 at 4:05 pm

    The big theme of this week’s readings seems to revolve around the idea that technology will simplify our lives and our world. I do think this is a perception that marketers of technology have pushed. It’s a very alluring promise that many people have bought into. Not only can technology make your life simpler through bots that pay your bills, 24-hour customer service, and instant access to the people and services you need at any moment, it makes the world simpler. This second idea is even more powerful – and child-like – than the first: with technology, the world is becoming smaller, and our differences do not create such enormous, uncrossable gulfs. It’s easy to believe that connectivity in the technological sense equals connectivity in the social sense, and since so many of us are connected in the first way, we must also be connected in the second way. Without even going into the economic issues of the digital divide (who has access at all), this perspective does a lot to mask one very real effect of technology on social structures: how technology can also feed our divisions in very non-technological domains.

    Take, for example, the ongoing and escalating debate in the worldwide Anglican church over the treatment of gays. Conservative congregations in the United States have actually “joined” the diocese of Nigeria, and both sides see the change as being enabled by internet technology. Or take terrorism - would jihadists be nearly so effective in instilling fear if it weren’t for digital video and easy global distribution through the internet? As we discussed in class, even open systems are vulnerable (if that’s the right word) to emergent social structures, including power structures. The internet is as much a place to foment divisions that exist in the real world as it is a place to heal them.

    In the end, though, this is not really a technological problem. The technology itself is amoral, even inert. We have the ability to use the force for good or evil, to throw in another sci-fi analogy. Brown and Duguid do a good job of calling attention to the weakness of the tunnel vision inherent in our idealized versions of the digital age, and they spend a lot of time talking about the effects on everyday interactions in businesses - social learning, work processes. So much of what networking has brought us is new conduits for social interaction. But, given how ingrained and crucial the subtleties of small-scale social interactions are, do we really have a good reason to think our big, macro ways of interacting as cultures and societies should change? I think it’s important for people, especially those who understand the technology, to be aware of the ways that technology mirrors and even intensifies pre-existing ways of both interacting with each other and isolating ourselves from each other. It’s a perspective that deserves broader examination: it’s not just the technology, it’s us.

  • 18. karenhsu  |  February 2nd, 2007 at 5:21 pm

    The notions of trust and trustworthiness concerning autonomous agents are particularly interesting to me. When a bot performs a task unexpectedly in a manner that resembles human deceit, Brown and Duguid pointedly ask, “Is this an inept or a corrupt agent?” and, even more interestingly, “Can you tell the difference?” This prompted a few questions of my own: What are the implications of botnets, collections of connected (infected) software bots that perform coordinated attacks? Moreover, bots have been known to perform both identity and personality theft — what kinds of legal ramifications are there for users with bots that act on their behalf but behave in ways that were unintended?

    Contrary to Business Week’s futurist prediction of a nearing “paperless office” phenomenon, I’d argue that the ubiquity of computers and digital media has instead reinforced the use of paper. Paper still holds an authority over digital documents, and it carries with it the technological momentum of past glory. (For example, COBOL is a high-level, business-oriented programming language that, despite being antediluvian, is still widely used. Of any language, it probably has the most written lines of existing code… why would you rewrite all of it even if a new, preferred language comes along?)

    Though, if paper were to ever become obsolete, I imagine it to be replaced by something like this:

    http://www.lovethatfun.com/Sketching_Physics/886/

    Now, how fun would taking notes on that be?

  • 19. lawan  |  February 3rd, 2007 at 5:49 pm

    It is a very interesting point of view, comparing atoms and bits. However, from the authors’ point of view, it seems like the tons of information we perceive in today’s IT world are the beginning of the end in many senses; for example, the end of firms, organizations, etc. I do not agree with this point. I see it as the real beginning of a new era.
    It is like atoms in the old days: when humans did not yet truly understand their properties and characteristics, we all suffered from severe viruses and epidemics. Once we had more knowledge in this area, advances in biotechnology resulted in a better quality of life, and consequently in greater human longevity.
    Likewise, I see today’s information technology (IT) as a tool for us to better understand the facets of information and knowledge and how they can be utilized. For instance, we can do business faster, as we can capture and process a lot more information in a shorter time. We can better manage our time, since less time is required to do things in everyday life – using online banking instead of waiting in a queue.
    More importantly, I see this as a challenging opportunity to discover new knowledge in all areas – i.e. medicine, biochemistry, business, science, sociology. We are still so naive in the information world. Though a lot more information can be captured these days, only a small percentage can be fully utilized. For example, would it be possible to use captured patient health information to predict new diseases and find ways to prevent them? These issues still need IT as an important element to overcome them. So, don’t be skeptical of technology and how it will change the way we live, but see it as an attempt at a better quality of life. We are just at the very beginning of an era of fully utilizing the knowledge around us.

  • 20. jimmy  |  February 4th, 2007 at 2:40 pm

    Agent technology is still far from maturity. Although many computer scientists are working hard on the field of “intelligent” agents, we have not yet seen complicated social behaviors blended into these agents. For instance, as mentioned in the reading, the negotiating bots are doing little more than matching demand and supply. It is difficult to add intricate human negotiation to multi-agent systems. And I wonder if we need such human-like agents, which might cause many issues and problems.

    For now, the applications of software agents are mainly the automation of tedious jobs such as web crawling and data collection from different platforms – not much intelligence is actually implemented in these bots. Computer scientists envision that agents can autonomously adapt themselves to changing environments and make decisions themselves. With the advance of computational power and the use of statistical learning methods, agents are more capable of making judgments and responding to the outer world based on past experience. But does that mean we are about to embrace a new world of human-like agents? Can we replace human agency with software bots in the future?

    I would be excited if new technology could lead us to this phase. However, there are lots of issues we have to think about before we can rely on agents. How autonomous do we want our agents to be? What are the consequences of an agent’s wrong decision? What if some agents look good but are potentially malicious and disruptive? And who should be responsible for the agents that are out of control? Intelligent agents are like black boxes: we don’t know exactly how they work, and we have to take the risk of unexpected situations. So I think even if we have mature agent technology in the future, human intervention will still be necessary. For example, the OASIS system (Optimal Aircraft Sequencing using Intelligent Scheduling) is an air-traffic control system whose purpose is to assist an air-traffic controller in managing the flow of aircraft at an airport. The prototype system successfully completed operational tests at Sydney airport in April 1995, but it was at most used as an auxiliary system. When faced with the tradeoff between convenience and security, we would rather choose the latter.

  • 21. johnson  |  February 4th, 2007 at 4:25 pm

    These chapters made me realize how unobservant I’ve been in judging technology. All of my life I’ve assumed that all technology serves some useful purpose and that society should strive to come up with as much of it as the imagination can output. The one simple example presented made me realize how wrong I’ve been. The authors asked how many pieces of software I still keep today on my computer, and to compare that with all the software I’ve installed in the past. It was enlightening to see what percentage actually proved useful in the long haul. It’s very true how our society deals with information: when we encounter a problem, we add more information to solve it. I see myself doing this at work, where I’m asked to show different slices, different perspectives on the same set of data. Different groups view information differently and continually ask for more reports and websites to fit these needs. What we end up with is an overabundance of information that requires much effort to sift through.

    Attempts to patch this leaking dam of information have also been a tough issue to tackle. The reengineering initiatives that many companies underwent in attempts to make themselves more efficient were quickly thrown out the window. I agree with the interpretation that reengineering was just another way of saying the company was downsizing. These corrective procedures are so damaging to the company because it is difficult to correct the things people are used to. But what do these events tell us? They show how serious a problem information is becoming if we don’t start managing it well. From what I’ve gathered, information is a valuable resource, but we must deal with it through a very stringent set of procedures; it can easily get out of hand and is difficult to resolve once it has reached the roots of a particular organization.

  • 22. nfultz  |  February 4th, 2007 at 7:38 pm

    I actually was enjoying the Practice vs Process reading this week, but was disappointed that the reading only really covered that tension wrt Ginormous Evil Companies. Do real coders actually worry about this stuff? Actually, yes.

    It would seem like an ISV could get away with relaxing on the process side. I know I’d rather code than write specs and collect requirements. If it weren’t for specs and style guides, I probably would have ended up a wtf long ago.

    There also doesn’t seem to be a clear line between process and practice at the small business level. The Joel Test looks like a pretty good list of best practices, but it also says things like “write a spec” and “fix bugs before adding features” that sound process-y.

    Eric Sink kinda touches on this too. He recommends hiring neither sheep nor anarchists. Coders that get caught up with processes won’t get much done, but hackers that completely ignore it won’t be very productive, either.

  • 23. jilblu  |  February 4th, 2007 at 8:18 pm

    In “The Social Life of Information”, the authors discuss the “productivity paradox”, which refers to the decline of productivity growth despite huge investments in computers and new technology. According to the authors, this occurs because society needs time to transform from one way of doing things to another. I witnessed this loss of productivity first-hand. In the early 90’s, when I was starting out as an architect, the profession was in the midst of switching from hand drafting to computer-aided drafting (CAD) programs. The change happened much sooner at larger firms, but by the early 90’s even the smaller firms were switching to computers.

    My first architectural job was at a 6-person firm, and we did all our drawings by hand. We used sheets of vellum or mylar, mechanical pencils, electric erasers, eraser shields, and stencils. There was one computer with a CAD program installed, but nobody knew the program well enough to use it efficiently. But some of my friends at larger firms used CAD, and I knew that I would have to learn. Eventually, I wanted a new job, so I taught myself AutoCAD after hours.

    At my second job, a firm of 25 people, some legacy projects were still hand-drafted, but all new projects were done using AutoCAD. At this firm, nobody over the age of 30 knew how to use AutoCAD, and those under 30 (myself included) either knew how or quickly scrambled to learn. All the drawing work was left to the more junior architects. The senior architects managed the projects, did hand sketches for a junior person to draft in CAD, and reviewed the sets of drawings.

    AutoCAD is a program with a very steep learning curve. I would estimate that it takes somebody around 6 months of full-time use to become as proficient a CAD drafter as she had originally been by hand. Because they rarely used the program, the senior architects were unable to make even the smallest change to any of the drawings; even the smallest change required the help of a junior person. Instead of being able to pitch in during deadlines, the senior architects could only stand by and watch.

    The senior architects no longer fully understood all aspects of their working drawings. While they could review the printed drawings, they had no idea whether or not the files were correctly set up. With hand-drafted drawings, all that matters is that they look right. With CAD drawings, how the files are set up greatly impacts the extent to which they can be exchanged with consultants, how easy it is to make changes, how easy it is for a new drafter to continue the work. Figuring all this out fell to the junior architects, who had never done this before, and who only partially understood the practice of architecture. We spent a great deal of time in formal AutoCAD training, in talking to friends at other firms to learn how they did things, and in re-doing CAD work that was poorly done and needed to be redrawn.

  • 24. bindiya  |  February 4th, 2007 at 8:35 pm

    In the article “Tunneling ahead” it is interesting how there are drastically different viewpoints about new technologies being created in today’s information age. One view is that tunnel design creates more problems than solutions; the other is that there is no better way to create new technology. My instant response is that there has to be some middle ground, where one can be extremely focused while creating technology but at the same time leverage the different resources available. It is important to have a holistic view and consider past mistakes while looking ahead and creating better technology. Social factors and other context have to be considered while designing new technology. Thinking about the user while developing technology has never been more important than in the current age we live in. On the other hand, people who criticize technology, blaming it for causing problems, often don’t realize how much they themselves depend on it, since most technology has an invisible effect while making everything so convenient.

    In regard to the article on “bots”, it was intriguing to read about the difference in the negotiation process between humans and agents. I don’t think we have reached a stage where agents can imitate the way humans negotiate, simply because the way a human mind works is more complicated than simply following a set of rules. Humans often go by instinct, have their own whims and fancies, and sometimes decide not to follow the rules. Software agents have a long way to go before they are “intelligent” enough to understand these nuances.

    It is fascinating for me to imagine a time when our lives will truly depend on these agents. Although our lives will be easier if most of our work is delegated to agents able to do it in a truly intelligent fashion, it will also make our lives that much more vulnerable.

  • 25. megha  |  February 4th, 2007 at 9:20 pm

    Before reading the chapter “Home Alone” by Brown and Duguid, I was not aware of the concept of “hotdesking”. But I have seen that many service-based IT companies (in India) have taken this approach to better utilize their resources. For example, if some employees are not assigned any project, they do not get laptops or office cubicles. Rather, all they get is free internet browsing in the common office library, where they can access mail, read, etc. This approach is not 100% hotdesking as done by Chiat, in the sense that people don’t have to change their location every day as long as they have an assigned project.

    Hotdesking advocates claim that hotdesking saves not just space but also improves intangibles like staff attraction and retention. It also offers people more autonomy, lets them make quicker decisions, and empowers them. Maybe these claims are true to some extent, but it depends on the culture of a particular place and, most importantly, the business model to which it is applied. In India, for example, many people seem to change jobs out of frustration when they are in this hotdesking period, as they lose their sense of ownership. IT services giant Infosys had to re-think and change this model due to a high attrition rate.

    Chiat had to abandon the hotdesking model after spectacular failures, and I believe it was because of the lack of enabling technologies back in the 1990s. Now wireless networks and modern phone systems that can connect customers to you via one number are key enablers of hotdesking. It really makes sense to hotdesk. Most companies have unused desk space, created by people out on the road, on holiday or in meetings, so the number of desks and therefore the amount of office space needed is actually much less. But the working and business model of the company should be taken into consideration. If all the staff are mobile most of the time, then obviously hotdesking will work wonderfully. But imagine hotdesking in a bank or administrative office where people do not travel 99% of the time.

    Even though frameworks like hotdesking and telecommuting are getting more and more common in organizations, I do not agree that these will obliterate the need for a social life in organizations. Working alone out of the home — or without a fixed space at the office — can be problematic. I feel that even seemingly casual interaction in an office environment can be of enormous help in getting work done. The input from colleagues, many of whom have likely faced similar problems, is especially helpful with the many minor issues that arise daily. The experience of others can readily be called upon, as it cannot in an isolated setting.

  • 26. mcd  |  February 4th, 2007 at 9:50 pm

    The story of the Xerox repair technicians illustrates what I think is a key point of The Social Life of Information, and a principal source of its success: to paraphrase Nick Clayton, narratives answer. In their refutation of his criticism of SCOT, Bijker and Pinch defined a theory as an attempt to reconcile observed behavior, and, to my mind, not necessarily something deterministic. Though being predictive might be a cornerstone of sociological theories, it is not the sole source of worth in theories in general.

    The reps were given prescriptive training, which while valuable, was demonstrated to be incomplete. Much of their skill grew out of experience and sharing stories with their colleagues. Similarly, Brown and Duguid do not argue their points with data and predictive hypotheses, but with qualitative and convincing stories of the potential dangers of single-minded infoevangelism. While they are certainly fuzzier and broader than the conclusions of Janet Fulk, for example, I find Brown and Duguid’s points–and their methods–more useful.

    There seems to be something similar at work in Actor-Network Theory, though our introduction to it was brief. Regardless of the ability to model behavior or predict results, there is value in recognizing that social and environmental factors, as well as technology itself, shape technological development. Wondering how tires would have developed alongside a different type of pavement or whether humans would have developed flight in the absence of birds or Bernoulli is (at least moderately) interesting. Predictive modeling is beside the point. History and development are messy and imprecise, and it is possible for there to be more to learn from anecdotes and real-world observations than in highly controlled and focused quantitative studies.

  • 27. Ken-ichi  |  February 5th, 2007 at 5:21 pm

    I enjoyed what we’ve read of Brown & Duguid, often despite myself. I’m fairly sure I’m one of the “infoenthusiasts” they deride, and I generally take a reductionist view of reality and problem solving. That said, I think they do a lot to cut through the hype and hand-waving that so often surround new technologies and their uses. I like the word “endism,” and I agree most of those afflicted with it fail to recognize the value (and tenacity) of existing social systems. What values play into this disregard? Scorn for human relations? Over-enthusiasm for technical “solutions”? Perhaps it is simply the tangibility of technologies that makes them more obvious, so much easier to focus on.

    The essay on agents and bots seemed somewhat less relevant. The conception of computer applications as autonomous agents helping humans is pervasive in science fiction … of twenty years ago. They cite web portals and Apple’s defunct Sherlock search-results aggregator as examples of agents, and describe them as problematic because we assume they have human-like powers of negotiation and judgement, but I have never observed this to be the case among my computer-savvy peers or my less savvy parents. My anecdotal evidence suggests that computer users ascribe little more agency to most computer applications than they do to a hammer. This gets a little fuzzy with sites like Google, where the tool is conflated with a corporate identity and the simple but goofy persona they portray with their seasonally appropriate logos. Anecdotal evidence is hardly evidence at all, though, so let’s look to the rock-solid truth of cultural artifacts! As I mentioned, the science fiction of 20-30 years ago had HAL and R2-D2, famously conscious automatons designed to help us that had demonstrably (occasionally catastrophically) human powers of social awareness. The vibrant science fiction of today tends toward near-future stories or worlds very firmly rooted in the physical trappings of the present day (or even the past), and for the most part does away with the idea of helper robots. The Cylons of Battlestar Galactica are more murky mirrors than faithful servants, and all of the tools in the show are just that: tools, without any expectations of agency. I think this is due largely to suspicion of such technologies and represents a current, overriding “tool” conception of computer applications.

    The workplace essays were both interesting and valuable dissections of what, from our current vantage, seems obvious: working with other people is, in general, better than working completely alone.

    The overriding message of the book seems to be that even when new technologies enable new social processes, they never require them. The expectation of some new way of interacting with people isn’t just about having the technology to do it; it’s about getting people to buy into it while still allowing them to perform the tasks they used to, at least as well as they could before.

  • 28. kko  |  February 6th, 2007 at 1:57 am

    After reading the first two chapters and the introduction to The Social Life of Information, I couldn’t agree more with Brown and Duguid. The part I found most interesting was the pair of questions posed at the end of the merchant-brokering section, concerning both product and merchant brokering: “Is this an inept or a corrupt agent?” “Can you tell the difference?”
    First, concerning perfect information and a computer agent’s “ineptness,” my thoughts immediately turn to George Akerlof’s discussion of the imperfect information that inherently exists in market transactions. Until we have perfect information, a computer agent will not be perfectly adept, and economics is going to play a large role in determining whether we decide to use a human or a computer agent. People currently withhold large amounts of information, whatever their intent. Even those who do not wish to keep information from others cannot come close to publishing, in a usable form, a complete set of relevant information (both economic preferences and assessments of the “social fabric”), nor can they react in real time to constantly changing goals and preferences (what Lave discusses) using our current information technology. Economics tells us to choose the solution that provides more net utility. Currently, there is still an uncountable number of interactions that human agents can perform with much more efficiency, providing higher net utility, because they require a diverse and inexact set of information, such as introducing one person to another in a social situation.
    As information technologies grow stronger, it does not seem clear to me that the number of human agents will decrease. I take the same position as Brown and Duguid: it seems fairly certain that technology can only complement the work of human agents. As technologies become more powerful, drawing information from a larger pool and processing more information in less time, I believe we shall see many more basic activities handed over to technology. Even in such a complex example as introducing one person to another, it is possible that with more technology and information we can systematize many of the basic information exchanges in introductions, such as what jobs people have, where they live, what they like to do, favorite books, food, etc. (social networking sites have already started this, but with questionable accountability). But it will still take a long while for technology to replace the value added by the human agent in areas like immediate trust transfer (credibility), the presentation of highly subtle commonalities (personality matches, behavioral similarities, etc.), and much less concrete, less easily quantifiable information (instincts, feelings, motivations, intentions, etc.). After surmounting these challenges, the technology would still need to update its information in real time (an even more challenging task) before computers could fully replace human agenting in social introductions.
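
    As a minimal sketch (Python; the profile fields and scoring weights are invented for illustration, not drawn from the book) of the kind of profile-overlap matching such systematized introductions might use, and of how much it necessarily leaves out:

        from dataclasses import dataclass, field

        @dataclass
        class Profile:
            job: str
            city: str
            interests: set = field(default_factory=set)

        def introduction_score(a, b):
            """Overlap of easily quantifiable fields; everything subtle
            (trust, instinct, personality fit) is invisible here."""
            score = 1.0 if a.city == b.city else 0.0
            score += 1.0 if a.job == b.job else 0.0
            if a.interests or b.interests:
                # Jaccard overlap of declared interests
                score += len(a.interests & b.interests) / len(a.interests | b.interests)
            return score

        alice = Profile("librarian", "Berkeley", {"jazz", "hiking", "databases"})
        bob = Profile("archivist", "Berkeley", {"jazz", "typography"})
        print(introduction_score(alice, bob))  # 1.25: shared city, one shared interest
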
    The second question, “can you tell the difference?”, addresses an issue of trust. The common agenting example discussed in the representatives section, that of money managers and financial tools, is still a strange but consistent phenomenon: wealthy individuals and large pools of capital consistently buy into hedge funds and money managers. It seems easier for people to trust another person than a machine. Following the line of trust, the wealthy individual places his trust in the money manager, the money manager places his trust in quantitative trading software, and the developers of that software trust historical statistics and patterns. This is interesting to me because I always ask: how accurate are these patterns? With all these levels of abstraction (in this specific software, and in technology applications in general), how exactly is sufficient trust established? Again, with all the levels of abstraction, can we even tell the difference between an inept money manager and a corrupt one that just wants to collect fees?
    In the end, at our current state of technology, I think it is inherently easier to answer these questions when human agents do the negotiating. From my experience and training, a good, optimal negotiation includes two major parts. The first and more important part (usually the overlooked one) is that the negotiating partners build up the maximum amount of mutual value (build up the size of the pie). Oftentimes the process of negotiation between agent and client will uncover mutual interest. In the Amazon example, this is the part where recommendations are made that actually suit the buyer well, matching supply and demand; the ability to do this is the measure of adeptness or ineptness. Only then, having built up the size of the pie, do agents take on the zero-sum work of claiming a larger piece of it, which is the measure of corruption. In the Amazon example, this is the part where Amazon may try to offer items with higher margins, or sponsored ones. For most people, it is impossible to tell or judge the difference unless they do the negotiating themselves. Until technology reaches a level of transparency to the consumer (which requires enough information transparency, sufficient credibility behind that information, and consumer education) at which they feel they can judge the difference between ineptness and corruption, I don’t think consumers will be able to fully adopt and trust computer agents in the way we have adopted and trusted human agents doing similar tasks in the past.
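
    A toy illustration (Python; the items, margins, and weights are all invented) of why ineptness and corruption look identical from outside: a recommender that ranks by estimated relevance plus a hidden sponsorship bonus produces the same bad suggestion whether its relevance model is weak or its bonus is large.

        def rank(items, sponsorship_weight):
            """items: list of (name, estimated_relevance, sponsored_margin)."""
            # The buyer sees only the final order, never the weight applied.
            return sorted(items,
                          key=lambda it: it[1] + sponsorship_weight * it[2],
                          reverse=True)

        catalog = [
            ("book the buyer actually wants", 0.90, 0.00),
            ("high-margin sponsored title",   0.40, 0.50),
        ]

        print(rank(catalog, sponsorship_weight=0.0))  # honest: relevant book first
        print(rank(catalog, sponsorship_weight=1.5))  # biased: sponsored title first

    Since the displayed ranking is the only observable, a buyer cannot distinguish a large hidden weight from a relevance model that simply misjudged them, which is exactly the transparency problem above.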

  • 29. jess  |  February 6th, 2007 at 8:03 am

    Anecdotes and disclaimers – they are my favorite things to read. Last week’s reading by John Seely Brown and Paul Duguid was full of such favorite things.

    Disclaimers allow writers to present new perspectives without jeopardizing their credibility. We discussed this point briefly in class: while the authors appreciate and encourage new technologies, they demonstrate an opposing and not necessarily popular point of view, namely that changes due to new technologies can have consequences that need to be considered ahead of time. Without knowing their balanced perspective, we would automatically label them as “unable to handle change” or “opposed to new technology.” Therefore, disclaimers not only permit new perspectives, they also display balance. And people appreciate balance; I certainly do.

    Anecdotes allow writers to prove their point while remaining interesting. I really enjoyed reading about the work experience at Chiat/Day (I wouldn’t be saying this if I had actually worked at Chiat/Day). And because I found it so enjoyable to read, instead of smiling and moving on, I started to think about some of the underlying problems with the practice of hotdesking. I also started to ask questions: “What could have been done to make hotdesking a success?” “What additional workplace phenomena positively or negatively affected Chiat/Day’s experience?” “What disruptive changes have I experienced in the workplace, and how were those situations improved?” “What technologies would improve the Chiat/Day experience, and what technologies would make it more disruptive?” Therefore, anecdotes are not only fascinating to read and effective in proving a point; they also get readers to really think about your point.

    In the future I will probably include my own disclaimers and anecdotes in these blog postings. But in this post, I’ll simply appreciate effective and interesting writing. I know that commenting on writing style is not the purpose of the blog post. However, this writing is so well done that it warranted this posting.

  • 30. daniela  |  February 11th, 2007 at 10:26 am

    I know it’s cheating, but I was out of commission last week and still wanted to give my belated two cents. I thought information was being parented like a teenager in Brown and Duguid’s account from 2000. The “social life of information” is filled with stories of rebellion and distrust: innovations throwing their weight against contemporary social needs; society’s reactions to corrupt agents; the rise of social unknowns. Agents were even being tended to like teenagers: society sits back and prays for the best. A parent may find it difficult to find common ground because of a generational or age difference when changes in brain chemistry are more critically at play. Similarly, companies attempt, unsuccessfully, to control the use of technologies with a “do it this way because I say so” methodology. Companies constantly fight a battle against innovations whose moves they cannot predict. So how can we understand our own creations? If they rebel, we must react. Society has matured along with its information since Brown and Duguid published their book. We no longer require our IT to be home by eleven, constraining it to the arbitrary categories “computer,” “agent,” or “bot.” We have instead begun to study the influences and effects of its maturation. I’m proud.
