Coding for Policy, Regulating Design


Annie Antón, “Incorporating Privacy Values, Policies and Law in Information Systems”

Annie Antón from North Carolina State University’s Department of Computer Science spoke today in the TRUST Seminar on “Incorporating Privacy Values, Policies and Law in Information Systems”. Annie described a series of papers and projects centered on privacy policies and HIPAA privacy.

The first set of work she described involved analyzing privacy policies and a user study involving privacy policies. The set of questions motivating this work includes:

  • How do we ensure that a given privacy policy complies with law?
  • How do we ensure that system requirements comply with the policy?
  • How do we ensure that information handling adheres to policy and system requirements?

To get at answers to these questions, her team first did a goal-based analysis of a set of privacy policies to pull out teleological goals, strategic goals, and tactical goals. This involved a team of three people (a lawyer, a computer scientist, and one other disciplinary perspective that escapes me) working with a software tool that helps parse out the goals embedded in a policy. They then used grounded theory to classify the goals into a taxonomy, and finally iteratively refined that taxonomy to remove redundancies and the like. A rough sketch of what such a representation might look like follows.
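As a concrete (and entirely invented) illustration, here is a minimal Python sketch of how mined goals might be represented and grouped. The three goal levels come from the talk; the class names, fields, and example goals are my own and are not Antón’s actual taxonomy or tool.

```python
from dataclasses import dataclass
from enum import Enum

# The three goal levels come from the talk; the names, fields, and
# example goals below are invented for illustration only.
class GoalLevel(Enum):
    TELEOLOGICAL = "teleological"  # overall purpose, e.g. "protect customer privacy"
    STRATEGIC = "strategic"        # organizational commitments
    TACTICAL = "tactical"          # concrete practices named in the policy text

@dataclass
class MinedGoal:
    level: GoalLevel
    statement: str        # the goal as extracted from the policy
    source_sentence: str  # the policy sentence it was mined from

goals = [
    MinedGoal(GoalLevel.TELEOLOGICAL, "protect customer privacy",
              "We are committed to protecting your privacy."),
    MinedGoal(GoalLevel.TACTICAL, "share data only with consent",
              "We never share your information without your permission."),
]

# First step toward a taxonomy: group mined goals by level; merging
# redundant statements would then happen by hand, via grounded theory.
taxonomy = {}
for goal in goals:
    taxonomy.setdefault(goal.level, []).append(goal)
```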

The second study (forthcoming in IEEE Transactions on Engineering Management, 2007) was a user study in which users read various treatments of privacy policies and then had their perception and comprehension of the policies measured. Each user saw one of four treatments: the original policy from a website; a list of privacy goals and vulnerabilities; a categorical representation based on their taxonomy (see above); or the original policy enhanced so that hovering the cursor over highlighted passages exposed the goals of that part of the policy.

Their findings are intriguing and statistically significant (p < 0.001).

  • Users perceived brand X as best protecting their personal information in the highlighted case.
  • The statement “privacy practices are explained thoroughly in the policy I read” drew the most agreement in the original-language and highlighted cases. (Apparently, a policy’s length creates a perception of thoroughness.)
  • Average comprehension scores were highest with the categorical treatment, then the highlighted version, then the list of goals and vulnerabilities… but, as above, the longer policies were perceived as the most protective (a paradox).
  • Users comprehended the original policies the least.
  • There was virtually no correlation with demographic factors, except that people aged 57+ scored lower on comprehension.

There was more to Dr. Antón’s talk, including a systematic analysis of HIPAA, that I’ll have to come back and describe another day in another post.


Values vs. Specifications

Something I’ve been thinking about lately is the “what is a value?” question. It’s clear from our discussions in class that things like privacy, security, transparency, etc. are the types of values that we’re most concerned with embedding in technology and policy. However, often the distinguishing characteristics of new gadgets are things like “faster”, “better”, “smaller”, “more storage”, etc. So, what is the difference between these things — let me call them specifications for lack of a better word — and our values?

Is it that something like security is just too high-level to be useful at the design stage? Do we have to break values down into specifications relevant to the thing we’re designing before we can embed the value? Are certain values, like privacy, almost impossible to specify in general? A toy sketch of what such a breakdown might look like is below.
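As a thought experiment, here is what decomposing a value into checkable specifications might look like for an imaginary location-sharing gadget. The specifications, names, and pass/fail framing are entirely my own invention; real decompositions would surely be contested.

```python
# Hypothetical decomposition of the value "privacy" into checkable
# specifications for an imaginary location-sharing gadget. Everything
# here is invented for illustration.
PRIVACY_SPECS = {
    "data_minimization": "collect neighborhood-level location, not GPS traces",
    "retention_limit": "delete location history after 30 days",
    "informed_consent": "prompt the user before the first broadcast",
    "disclosure": "show exactly which fields leave the device",
}

def value_embedded(design_features, specs):
    """Crude test: the value counts as 'embedded' only if the design
    satisfies every specification derived from it."""
    return all(name in design_features for name in specs)

# A design that satisfies only two of the four specs fails the value.
print(value_embedded({"data_minimization", "retention_limit"}, PRIVACY_SPECS))  # False
```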


Medical data mining as a business model: issues?

(excerpt re-posted from Feminist Law Professors)

Deborah Peel, MD writes in an e-mail that is quoted here with permission:

“Fortune Magazine lauds one of the nation’s largest data miners of medical records, without any awareness that one major reason for the corporation’s success (revenue of $88 billion/year) is the illegal and unethical use of Americans’ medical and prescription records.

“Yes, they ‘wire the world’, but McKesson does so by ignoring strong state and federal laws and 2,400 years of medical ethics that require informed patient consent before medical records can be used, disclosed, or sold. Stronger state laws and medical ethics are supposed to trump the HIPAA Privacy Rule, which was intended to provide a ‘floor’ for privacy protections, not become the ‘ceiling’ for privacy. Instead, McKesson and the IT industry are ignoring state laws and medical ethics, because the unconscionable profits from selling medical data are irresistible.”

There’s an interesting intersection here of regulations, standards, and a business with a lot of talented programmers working on networking all that data. Whose responsibility is it to protect the privacy of patients? Can anyone within a company hold that position and be relied upon? Is it possible to frame the concerns with patient privacy as legal or technological problems, or potential problems, when making a case to an employer (if the ethical arguments, ahem, seem to be unconvincing)?

And is what happened here that the company effectively re-interpreted the HIPAA Privacy Rule? Is that due to the way their systems incorporated the data? (What I’m trying to get at is the way that coding can subvert policy intentions.)


Nat’l Federation of the Blind sues Oracle + state of Texas over application forms

The National Federation of the Blind has filed a lawsuit against Oracle Corp. and the state of Texas seeking to ensure that all applications used by the state government are accessible to blind state employees. The suit specifically cites Oracle PeopleSoft’s human resource applications, and names directors of the Texas Health and Human Services and Workforce Commissions, the state’s acting CTO, Brian Rawson, and Oracle as defendants.

full story here

This suggests something about how the designers or the company viewed the audience of users for their software, and it points to a possible problem with designing for averages or other statistics that don’t take into account the basic rights people (may) have under law. Designing for most people, when “most” still isn’t everyone (or at least leaves out constitutionally protected groups), may not be enough in some situations. It’s also interesting to think about whether each player (the agency, the designer, the company as a whole) assumed that another player would rightfully be responsible for dealing with that question.


Possibly interesting talk on Friday

Beth Noveck (Stanford) “Designing Civic Software”
(Yahoo Research Berkeley Brain Jam)

Friday, March 2nd, 3pm
Yahoo! Research Berkeley
1950 University Ave., Berkeley.
http://upcoming.org/event/149223

We are witnessing the emergence of decentralized groups, without formal organizations, that solve complex social problems and take action in the world together. In groups, people can accomplish what they cannot do alone. New visual and social technologies are making it possible for people not only to create community but also to wield power and create rules to govern their own affairs. This presentation will focus on technology and the opportunity for collective action, particularly on the emerging frameworks, both technological and legal, for “collective visualization,” which will profoundly reshape the ability of people to make decisions, own and dispose of assets, organize, protest, deliberate, dissent and resolve disputes together. By looking at several examples, including the design of “Peer to Patent” and the Visual Company (http://dotank.nyls.edu ), we will discuss the process of digital institution design that melds legal code and software code to address how institutions respond to the growth of networks. In so doing, we will not only address how technology is used in our democracy but how it might change what we ultimately come to define as democracy.

———————–Bio—————————————–
Beth Noveck is a professor of law and director of the Institute for Information Law and Policy at New York Law School and a visiting professor at the Department of Communication, Stanford. She also runs the Democracy Design Workshop, an interdisciplinary “do tank” dedicated to deepening democratic practice through technology design. Prof. Noveck teaches in the areas of e-government and e-democracy, intellectual property, innovation, and constitutional law. Her research and design work lie at the intersection of technology and civil liberties and are aimed at building digital democratic institutions through the application of both legal code and software code. She is the designer of online civic projects, including Community Patent Review: “Peer to Patent”, Unchat, Cairns and Democracy Island (see http://dotank.nyls.edu ) and is the author and editor of numerous books and articles, including the book series Ex Machina: Law, Technology and Society (NYU Press). She is the founder of the annual conference “The State of Play: Law & Virtual Worlds,” cosponsored by New York Law School, Harvard, and Yale Law School. Formerly a telecommunications and information technology lawyer practicing in New York City, Professor Noveck graduated from Harvard University and earned a J.D. from Yale Law School. After studying as a Rotary Foundation graduate fellow at Oxford University, she earned a doctorate at the University of Innsbruck with the support of a Fulbright. She (and her students) blog at http://cairns.typepad.com



Policy Impact Assessments and Privacy Addenda

In this week’s readings, we read the CDT paper on policy impact assessments and Batya Friedman et al.’s piece on the Intel privacy addendum (unfortunately, that last one is published in DOC form). The common thread in these pieces is an attempt to instill policy-relevant values in a technical process: development in the case of Intel’s PLOS, standards-setting in the case of the IETF. It’s nice to see that in both cases they have put forth proposals for challenging problems.

I talk about each piece and note questions I had below the fold…

The CDT piece proposes that standards-setting bodies develop public policy impact assessments (PPIAs) as a substitute for requiring public interest organization involvement (which is normally not sustainable). Specifically, these PPIAs would serve to break down proposed technologies or protocols into atomic pieces that could be evaluated for known impacts on public policy (e.g., “Does this technology expose information about an end user to a third party?”). In my opinion, the benefits and limitations of PPIAs are laid out clearly, and the authors also point to other important supporting roles and activities (dedicated policy staff, soliciting input from outside, promoting heightened sensitivity to policy issues) that will further the policy-consciousness of standards-setting bodies. A rough sketch of what a PPIA-style checklist might look like follows.
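To make the idea concrete, here is a minimal sketch of a PPIA-style review in code. The questions and the example protocol pieces are invented for illustration and are not taken from the CDT paper.

```python
# Sketch of a PPIA-style review: break a proposal into atomic pieces
# and ask each piece a fixed set of known policy-impact questions.
# The questions and pieces below are invented, not the CDT's list.
PPIA_QUESTIONS = [
    "Does this piece expose information about an end user to a third party?",
    "Does this piece create a persistent, linkable identifier?",
    "Does this piece enable blocking or filtering of content?",
]

# Map each atomic piece of a hypothetical protocol to the questions
# its designers flagged during review.
protocol_pieces = {
    "client hello with static device ID": {PPIA_QUESTIONS[1]},
    "usage reporting to vendor": {PPIA_QUESTIONS[0]},
    "handshake padding": set(),
}

for piece, flagged in protocol_pieces.items():
    if flagged:
        print(f"{piece}: needs policy review")
        for question in flagged:
            print(f"  - {question}")
    else:
        print(f"{piece}: no known policy impacts")
```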

Questions for discussion:

  • Would it be a significant burden, during the design stages of your projects, to author a PPIA and then request comment from policy folks that you know?
  • PPIAs are formative tools, useful in the design stages of a technology; what kinds of summative tools (used to assess an existing technology) can you think of for public policy issues?

The Friedman piece is from the Ubicomp conference and discusses developing a privacy addendum for open source licenses that specifies a default privacy architecture, one that must be carried into derivative (modified) works under that license. Whereas the CDT piece was about general approaches to addressing public policy issues in technical settings, the Friedman piece talks about a specific case. Their challenge was to draft a legal document, an addendum to a software license, that would require downstream modifiers to honor some of the original design choices with respect to one value, privacy. To do this, they chose to require the software (or modifications of it) to provide informed consent as a way of detailing to the end user what information is being broadcast about them. They also went the next step and developed a threat model that looked at specific privacy attacks and how their addendum could deal with them. A toy sketch of the kind of consent gate they seem to have in mind is below.
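Here is a toy sketch of the kind of default behavior such an addendum might mandate: disclose exactly what the software broadcasts and obtain consent before transmitting. All the names and the consent flow are hypothetical, not taken from the Intel system or the actual addendum text.

```python
# Toy sketch of an informed-consent default that a privacy addendum
# might require derivative works to preserve. All names and the flow
# are hypothetical, not from the actual Intel system.
BROADCAST_FIELDS = ["room_location", "device_id"]

def informed_consent(fields):
    """Disclose exactly what will be broadcast, then ask the user."""
    print("This software will broadcast:", ", ".join(fields))
    return input("Allow broadcasting? [y/N] ").strip().lower() == "y"

def broadcast(fields):
    # The addendum-style requirement: no transmission without consent;
    # a modified version would have to keep this gate in place.
    if not informed_consent(fields):
        print("Broadcasting disabled; nothing leaves the device.")
        return
    print("Broadcasting:", fields)  # stand-in for the real transmission

if __name__ == "__main__":
    broadcast(BROADCAST_FIELDS)
```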

The one critique I have of this work is its use of a legal instrument to try to affect software design. That is, I’m just not sure a value like privacy is as amenable to a licensing context as the typical license elements (exclusive rights under copyright law). Privacy policies are a good analogy: people don’t read them, rarely understand them, and the policies themselves can be contradictory.

Questions for discussion:

  • Will developers read and adhere to the privacy addendum?
  • If a developer does not and releases a privacy-invasive piece of software, will the original developers (e.g., Intel) sue to enforce the addendum?
  • What about all those non-compliant copies?
  • Addendums will only work with licensable technologies; what about hardware or things that aren’t strictly software (protocol designs, networking, etc.)?
  • Are there options outside of licensing (norms, market, architecture) that could be used by developers and modifiers that would allow open-sourcey innovation but also respect privacy considerations?
