INFO 202

Assignment 3 - Designing a Vocabulary - Feedback

Author(s):
Bob Glushko
glushko@ischool.berkeley.edu

Course: INFO 202
Date: 3 October 2008
Title: Assignment 3 - Designing a Vocabulary - Feedback
Summary feedback about the assignment. You'll get an email with a grade and specific comments about your work.

Summary

When I took a quick look at some of your assignments earlier this week, I was a little concerned because I thought I saw some problems with scope, the precision of definitions, the hierarchy/lack of hierarchy in your vocabularies, and in your facility with using the XML report format. But now that I've received careful evaluations from Shawna and Jonathan, and spent a day taking a very close look at your work, I am more pleased with how you did.

This assignment is designed to make you confront the central challenges in designing a vocabulary. I hoped all of you would come to understand how critical it is to choose an appropriate scope (and hence, an intended user community) for your vocabulary and then recognize the tradeoffs imposed by that decision. In any vocabulary, there is an intricate balance between the material covered (breadth), the specificity of each descriptor (precision), and the number of descriptors available. This balance is often the result of negotiations within a team of designers and users, and many of you described some of these "negotiations" you had with yourself as you designed your vocabulary. The choices you have to make about what concepts to include, what words to use for them, and whether particular concepts/words are inside or outside your vocabulary are important but secondary to the choice of scope as it emerges from these negotiations.

Put a little differently, since all of you made a scoping decision without the benefit of any "negotiations" with anyone else, some of you chose a scope that was too broad or too narrow to describe easily as an "event." Most of you figured that out after a bit of iteration and struggle, and I hope what you learned from that was worth it.

I was very surprised to see that a majority took on the challenge of creating a DTD for your instance, and almost everyone who turned in a DTD turned in one that validated the instance. I am sure that you noticed, and some of you noted in your evaluations, that DTDs weren't able to enforce many of the rules you expressed in your definitions.
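To make that limitation concrete: a DTD can require that certain elements appear, and in a certain order, but it has no way to constrain what goes inside them. A hypothetical fragment (the element names are mine, not from anyone's assignment):

```xml
<!-- The DTD can require a TicketPrice element, but its content model
     is just #PCDATA: "free", "-5", or "TBD" are all equally valid here,
     even if your definition says it must be a non-negative dollar amount. -->
<!ELEMENT SportingEvent (EventName, EventDate, TicketPrice)>
<!ELEMENT EventName   (#PCDATA)>
<!ELEMENT EventDate   (#PCDATA)>
<!ELEMENT TicketPrice (#PCDATA)>
```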

Overview of Grading

I prepared a very precise grading checklist for the TAs to use, and they were able to apply it with almost perfect consistency (their average grades differed by less than 1/10th of a point on a 10-point scale). But since I'm the professor I can overrule them even when they use the checklist I gave them, and in many cases I did that to give you a higher grade than they had calculated. This is because I tend to give credit for effort, even if it wasn't entirely successful, and I also tend to give credit for "insight" or "creativity." Those are hard to define in general, but in this assignment it meant that if you chose an unconventional or "odd" type of event to describe, I was favorably disposed toward your work.

The grading for this assignment is on a 10-point scale, with 1 extra credit point possible for a DTD if the instance is valid. In addition to the step-by-step grading instructions for each question, we also evaluated whether you followed the instructions and turned in a report that used the report document type from last week. If you didn't, you get a 1-point deduction for not following instructions well enough to demonstrate that you did Assignment 2.

The average grade from the TAs was about 8.4. The average grade that I'm assigning is about 9.1. There were a handful of 11s, but a couple of them really stood out and I've asked for permission to make them available to you. If I get that permission, I'll send out a note to the class list so you can see what truly exceptional work looks like.

1. Scoping the Vocabulary -- Grading Instructions

1 point if the scope is reasonably clear.

2. Defining the Terms/Descriptors/Semantic Components -- Grading Instructions

4 points here for the vocabulary, and 2 for the definitions. It will probably be easier to notice things that aren't right than things that are right. Let's assume that everyone starts with a 4 on the vocabulary part, but can get 1 point deducted for each of these problems:

-1 if they don't focus on the terms for describing an "event" but get distracted and start including all kinds of "domain objects" (like "ball" and "bat" for sporting events, or the analogous objects for music and food events) that are in the domain but aren't part of an event description.

-1 if they have terms that vary in abstraction or hierarchy or granularity but they don't model that variation in a reasonable way; i.e., some parts of the vocabulary are modeled too precisely, or all the tags are at the same level when they should be grouped in containers.

-1 if their model is not very consistent with their scoping statement; for example, they say they are going to model a summary of a sporting event but they have a lot of "play by play" elements.
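The hierarchy problem in particular is easy to see in the markup. A sketch with made-up element names: both encodings carry the same descriptors, but only the second models the fact that some of them describe the venue rather than the event as a whole.

```xml
<!-- Flat: every tag at the same level, so the hierarchy is lost -->
<Event>
  <VenueName>Greek Theatre</VenueName>
  <VenueCity>Berkeley</VenueCity>
  <StartTime>19:30</StartTime>
</Event>

<!-- Containered: the venue descriptors are grouped where they belong -->
<Event>
  <Venue>
    <VenueName>Greek Theatre</VenueName>
    <VenueCity>Berkeley</VenueCity>
  </Venue>
  <StartTime>19:30</StartTime>
</Event>
```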

For the definitions, if they are OK they get a 1. I can't imagine anyone getting a 0. To get a 2, you must capture any hierarchical structure and you have to have tried to do it in a way that isn't circular. Definitions of the form "X is a Y" where X and Y are synonyms aren't that useful, which is why I proposed the hypernym / hyponym template.

2. Defining the Terms/Descriptors/Semantic Components -- Comments

Most of you did OK on this, but there was great variation in the level of effort that you put into it. Perhaps I should have more strongly encouraged you to design vocabularies for novel events, because it seemed that the more "off the wall" or unexpected the vocabulary was, the better the work was because it was just more interesting for you to think about.

There were some common problems in your models that are worth pointing out. Several of you had a lot of semantic redundancy because you included intermediate results (for sporting events) that would enable you to calculate final scores, winners, losers, and summary statistics. If you have all of these things in a description, you run the risk of having integrity problems. Another common problem was uneven granularity where some parts of your model were very finely decomposed while others were relatively big blobs of text. And finally, a lot of you had elements whose content was essentially a set of values separated by white space or some delimiter, which aren't useful unless you parse them into their pieces.
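The delimited-values problem looks like this in practice (an invented example, not taken from anyone's instance):

```xml
<!-- A blob of delimited values: an application has to re-parse this
     string just to get at the individual quarter scores -->
<QuarterScores>7 14 3 10</QuarterScores>

<!-- Each value in its own element, so the markup does the parsing -->
<QuarterScores>
  <QuarterScore>7</QuarterScore>
  <QuarterScore>14</QuarterScore>
  <QuarterScore>3</QuarterScore>
  <QuarterScore>10</QuarterScore>
</QuarterScores>
```

The second form also illustrates the redundancy point: once the quarter scores are present, a separate final-score element merely restates information that could be computed from them, and the two can drift out of agreement.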

3. Encoding an Instance -- Grading Instructions

1 if they create an instance. If they can't produce an XML instance and instead encode an instance in some other format... sorry, but they get a 0 because they had Assignment 2 to work that out.

3. Encoding an Instance -- Comments

I told you that I almost always use only elements when I encode a vocabulary in XML, and those of you who followed my advice had few problems. If you used attributes, were you trying to learn how they worked, or did you think they were better than elements? (In most cases they weren't, because they contained content that you'd want to reveal even if there wasn't any markup at all.) Some of you adopted my CamelCase rule for compound element names, but others were indiscriminate in mixing CamelCase and lowerCamelCase for element names, and that's not good practice.
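For those still weighing the choice, here is the contrast in miniature (invented names again):

```xml
<!-- Attribute-heavy: strip away the markup and the content disappears -->
<Performer name="Radiohead" role="headliner"/>

<!-- Element-only, with CamelCase compound names: the content survives
     even if the tags are removed -->
<Performer>
  <PerformerName>Radiohead</PerformerName>
  <PerformerRole>headliner</PerformerRole>
</Performer>
```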

I looked at everyone's XML instance, and noted that some of you included comments to explain or record things you did.

4. Evaluating your Vocabulary -- Grading Instructions

2 points for this. It would be hard to believe that they didn't learn something by encoding, so they should have something to say about that. To get 2 points you have to address both WHAT you did and WHY you did it, as the instructions said. Reserve a 2 for insightful answers with some specific details; more people should get 1 on this than 2 because we need to keep the top grade for work that earns it.

4. Evaluating your Vocabulary -- Comments

Some of you didn't put much effort into this, but some of you really impressed me with your insight and candor about what you learned doing the assignment and how you revised your vocabulary or its encoding.

Some of you are lucky. Because I also looked at a lot of the XML instances for your reports, a couple of you got credit from me that you didn't get from the TAs because they looked at the HTML version you turned in, and you used comments in the XML to record your evaluations and revisions... AND THESE DON'T GET PASSED THROUGH TO THE HTML BY THE XSL TRANSFORM!
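If you're wondering why the comments vanished: XSLT's built-in template rules simply ignore comment nodes, so unless a stylesheet contains an explicit rule for them, nothing is copied to the output. A rule like this one (not in the course stylesheet, as far as I know) would have to be added to preserve them:

```xml
<!-- Copy each XML comment through to the output; without a rule
     matching comment(), the built-in templates discard them -->
<xsl:template match="comment()">
  <xsl:comment><xsl:value-of select="."/></xsl:comment>
</xsl:template>
```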

A couple of you took on the almost impossible task of inventing namespaces to partition your vocabulary into a set of vocabularies and then trying to specify that in your DTD. Unfortunately, namespaces were invented long after DTDs were, and there just isn't a good way to tell a DTD about namespaces. You can hack some attributes to simulate namespaces, and one of you did that, which took me a while to figure out because in 20 years of doing SGML and XML I'd never seen it before.
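For the curious, the usual version of that hack declares the xmlns attribute as a #FIXED value in the DTD, so every valid instance carries the namespace declaration even though the DTD itself knows nothing about namespaces. A sketch (the element names and URI are invented):

```xml
<!ELEMENT Event (Venue, Performer)>
<!-- #FIXED forces the namespace declaration onto every Event element -->
<!ATTLIST Event
  xmlns CDATA #FIXED "http://example.org/event-vocabulary">
```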

5. Creating a DTD (Optional) -- Grading Instructions

1 point extra credit if they did this and the instance is valid with respect to it.

5. Creating a DTD (Optional) -- Comments

Some of you have obvious expertise with XML and this extra credit was an easy target as a result. While I am pleased to see that some of you already know XML, I was more impressed by people who did it for the first time and made a DTD that would validate an instance. Congratulations.