is203 - Social and Organizational Issues of Information » Week 8

Week 8

Mar. 6th: Identifying, Justifying and Presenting Problems

Chapters 4-5 in Creswell, John. 2003. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks, CA: Sage Publications.

Mar. 8th: Choosing Research Methods to Fit the Problem

Dooley, David. 2000. “The Logic of Social Research: Ruling Out Rival Hypotheses.” In Social Science Research Methods, edited by D. Dooley. Prentice Hall.

Chapters 1 and 6 in Creswell, John. 2003. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks, CA: Sage Publications.

January 2nd, 2007

15 Comments

  • 1. yliu  |  March 4th, 2007 at 12:26 am

    The discussion of various ways to write a research study/proposal for a research study is very informative. I liked the mad-lib sentence skeletons for writing various parts of research papers. I’m all in favor of having a standard template library for various types of papers and just plugging in our assertions, results, and conclusions at writing time :-)

    I find it interesting that some would not adopt research pragmatism - explaining or researching a problem with whatever method is most appropriate. Are there really people who insist on such theoretical purity that they’d forsake results in favor of philosophical debates about the nature of truth, knowledge, and reality? Do these people actually exist outside of philosophy departments? Though I do see that a bias may exist in some cases. Coming from an engineering background, it seems that quantitative methods are considered “rigorous”, with all the positive connotations thereof. The implication, of course, is that the rest are “not rigorous”, with all the negative connotations as well. However, in some cases, rigorous experimental studies aren’t desirable due to ethical or logistical concerns.

    A small reaction to the philosophy of science discussion. Science is a skeptical field of inquiry, that much is certain. But if we proceed beyond the line of “I see a tree” -> “I have an inner visual sensation consistent with what I have learned is called a tree”, then perhaps we’ve wandered too far from the purpose of science - to understand and explain what we perceive of the physical and natural world. If we are skeptical to the extent that we doubt reality itself, then the entire exercise becomes a rather meaningless affair.

    “Whether your skepticism be as absolute and sincere as you pretend, we shall learn bye and bye, when the company breaks up. We shall then see, whether you go out at the door or the window, and whether you really doubt, if your body has gravity, or can be injured by its fall, according to popular opinion, derived from our fallacious senses and more fallacious experience.” Not that I support Cleanthes’ position on anything else, but this (admittedly out of context) does seem appropriate.

  • 2. daniela  |  March 4th, 2007 at 8:05 pm

    I liked exploring the second half of last week’s readings in relation to this week’s Creswell readings. Nass employed similar approaches to those of Kiesler & Sproull to tackle overlapping yet distinct questions, but each group’s studies led to different conclusions. So how did similar knowledge claims, strategies of inquiry, and methods, in Creswell’s terms, produce conflicting results?

    Both groups seemed to use a postpositivist approach in investigating and forming knowledge. Nass and Kiesler & Sproull similarly constructed inferences, ran experiments to investigate their questions, and rationally analyzed the results. Yet Nass’s results suggested that people report correlating human emotion with computers, while Kiesler & Sproull’s studies found that these reports do not imply a true correlation: they may be no different from people’s reports about any other inanimate object. Both groups used quasi-experiments, a quantitative approach, to investigate their questions. More than their choice of design frameworks, their choice of scope for their research questions seemed to determine the significantly different results. It could be useful to expand Creswell’s design framework to include other choices in the construction of knowledge claims, such as determining problem granularity or scope.

  • 3. Sean_Carey  |  March 5th, 2007 at 11:57 pm

    I found the readings interesting. The whole chapter on setting up an intriguing introduction is similar to what I learned in undergraduate English classes. This reading, though, really makes the differences between qualitative, quantitative, and mixed methods introductions clear. I find the deficiencies model an interesting and useful model for an introduction.

    Creswell’s use of articles as examples helps illustrate the point for me. It’s one thing to just explain how to write a good introduction hook. But to also show examples of good hooks and explain why they’re good hooks is really helpful, especially for someone who does not write many research papers.

  • 4. zgillen  |  March 7th, 2007 at 6:15 pm

    I observed some interesting differences between Creswell and Dooley in the readings this week. The Creswell readings seemed focused on the process of creating good scientific writing (catching the reader’s interest, focusing on specific phrases to underline the purpose, etc.), whereas Dooley examines the difficulties of making any claim actually ‘stick’ in the scientific community. After reading Dooley, I became a little disheartened that even with strict adherence to methods, research, and process, you can still be doubted and potentially ridiculed in the professional and academic world for publishing papers on social research. Especially frightening are those papers that address important public policy. Dooley emphasizes “ruling out the rival hypothesis”, the notion that you must consider every other possible factor of causation when designing the study or discussing your results. Yet even conducting a study that addresses all known rival hypotheses does not guarantee that a new one won’t surface five years later and challenge your arguments. The paper does address the fact that there will always be disagreement between experts, and ultimately the reader needs to observe the facts and make a decision.

    Studies that are released by the media tend to have many rival hypotheses associated with them because they are often small in scope and sensationalized. Last week, the AP news wire published a story with the following title: “Patients favor doctors that use an electronic health record.” The study was run by a large consulting firm, which interviewed 600 consumers and 100 doctors. The “consumers” were randomly selected and the physicians were interviewed by phone. That was the extent of the information released about the study’s methods. Obviously, there are gaping holes in the specifics of the data that enable the construction of rival hypotheses. Some examples of the alternatives here are “Consumers favor technology in healthcare” or “Patients are unaware of the use of technology in providing healthcare”, etc.

    Referring back to Tuesday’s lecture, these sensationalized news stories might not obey strict research standards, but often they are based on some collected data. The sample might not be of a reasonable size (600 out of 300 million Americans is rather small), but it is collected information nonetheless. This could be a form of inductive logic based on very broad constructs that proposes a theory. While this study could be a ploy for continued investment in healthcare consulting, it might lead to further deductive research that provides more accurate data to support or disprove the theory. I think many people are aware of the shortcomings of mainstream, cursory studies; however, I think they often lead to further and deeper exploration in academic research.
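    For a rough sense of what 600 responses can and cannot support, here is a back-of-the-envelope margin-of-error calculation (a minimal Python sketch; it assumes simple random sampling, which the story never actually confirms):

        import math

        n = 600   # consumers interviewed in the AP story
        p = 0.5   # worst-case proportion, which maximizes the margin
        z = 1.96  # z-score for a 95% confidence level

        margin = z * math.sqrt(p * (1 - p) / n)
        print(f"95% margin of error: +/- {100 * margin:.1f} points")  # ~ +/- 4.0

    Notably, under simple random sampling this margin depends on the sample size n, not on the 300-million population size; the deeper threats to validity are how the 600 respondents were selected and what they were asked, which is exactly where the rival hypotheses creep in.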

  • 5. Sean_Carey  |  March 8th, 2007 at 9:58 am

    I really liked Dooley’s “The Logic of Social Research: Ruling Out Rival Hypotheses.” It paints a good picture of why we should be skeptical about science. Good science means not just being willing to reject your hypothesis in light of the evidence, but actively working to disprove it. This raises ethical questions about scientists, because many will research only to prove their point and not look for counter-evidence that might disprove their hypothesis. I agree with Dooley’s argument that consumers of research should be skeptical. We have to be skeptical because our brains are so good at finding associations. I was surprised that Dooley didn’t mention Plato’s allegory of the cave, which holds that we understand the world only through the limits of our senses. It fits well with Dooley’s argument: sometimes our senses and pattern-matching fool us about the world around us, which is why we need to be skeptics.

  • 6. elisa  |  March 8th, 2007 at 11:16 am

    I am very interested in the issue of quantitative versus qualitative research methods, and in the fact that the social sciences, even when they utilize quantitative methods, are not predictive. When I asked where economics fits in all this, I was thinking of two economists who are big on quantitative methods and partial to predictions, or at least so it seems from my superficial reading: Paul Krugman and Steven D. Levitt.

    A while ago I read Krugman’s short essay Models and Metaphors for the ICT for development class (recommended if you’re interested in mathematical models; short & sweet). His main point is that models are the fundamental linchpin of sound research methodologies and the only legitimate way to deal with complex systems. Krugman’s argument is very intriguing: models are simplifications of what we want to analyze, and in this sense a necessary “falsification” that lets us deal with systems that are too complex to reproduce. Creating a model entails choices of variables, judgment calls, and compromises that are often subjective, but a model “is a good model if it succeeds in explaining or rationalizing some of what you see in the world in a way that you might not have expected.” To make his point, Krugman offers the example of Fultz’s dishpan, a crude physical model that reproduced two features of the global weather system and yielded an accurate reproduction of essential elements of weather behavior, allowing important insights into it.

    One could say that a qualitative (or, as Krugman prefers, narrative) analysis of a system also gives us new insights into it. True, but “The models, unlike a purely verbal exposition, reveal the sensitivity of the conclusions to the assumptions (…) A model makes one want to go out and start measuring, to see whether it looks at all likely in practice, whereas a merely rhetorical presentation gives one a false feeling of security in one’s understanding.” People who avoid mathematical models in favor of narrative explanations also simplify reality, but choose to do so through the use of metaphors rather than mathematical models. “And metaphor is, of course, a kind of heuristic modeling technique”. In other words, they also use models, but in an undisciplined, inconsistent way that is not formalized and therefore cannot be measured.

    It seems to me that the social sciences are embracing more and more quantitative research, in part out of a feeling that the lack of ‘hard science’ behind qualitative research methods leaves them open to accusations of partiality and inconsistency. Is this the case? Or is it just a logical evolution of the discipline? Do we implicitly trust numbers more than words because, after all, numbers are numbers are numbers, i.e. neutral? (Although Jasanoff - I think she’s a sociologist - would say: numbers may be neutral, but counting certainly isn’t.) And aren’t hard data supposed to push us towards a predictive model? (By the way, regarding the discussion we had in class about the stock market crash, Krugman would agree that economic models didn’t predict it because they are not sophisticated enough yet, but they are getting there; for now, they are concerned with the long-term trends of stock markets, not with short-term performance or with single companies.) After all, Fultz’s dishpan is used to understand weather patterns so that they can be predicted. What I have been wondering since reading the essay, though, is: can human behaviors be considered a variable just like physical phenomena? Krugman thinks so: “Homo economicus is an implausible caricature, but a highly productive one.” I don’t know. I am attached to the idea of human uniqueness, but I am beginning to suspect more and more that humans are much more measurable and predictable than we’d like to admit.

    Finally, three quick thoughts.

    I mentioned Steven Levitt, economist and author of Freakonomics: a good example of ‘value-laden’ quantitative research. He certainly takes his duty of skepticism as a researcher very seriously, but at the same time his “look, the numbers say so, so it is true” stance seems to contradict that same skepticism. Can one be skeptical and positivist at the same time?

    Creswell: I don’t have any enthusiasm for his template for writing papers. What he writes is useful for clarifying the points one should address/plan/think about when writing a paper, but a template! The very (recent) Anglo-Saxon model of (1) an introduction where I give you a summary of the paper; (2) a body where I write the paper, expanding the points I summarized in the introduction; and (3) a conclusion where I reiterate the points that I explored in the body and introduced in the introduction, can be depressing (if very useful for the student in a hurry). It is certainly useful for certain topics, but if considered the only way to write papers (as Creswell implies it should be), it encourages laziness in the writer and mental torpor in the reader. There are other, equally if not more engaging, ways of writing that Creswell completely ignores. Mix and match! Surprise yourself and the reader! Creativity is not incompatible with rigorous scholarship, as many scientists show in their writings. But maybe I misunderstand the audience Creswell has in mind. (Reading the sentence “Researchers often use the present or past verb tense in journal articles and dissertations, and the future tense in proposals because they are presenting a plan for a study” is what makes me think so; surely people who have made it to university, write papers, and do research should know the difference between past and future tense?)

    And finally, per Creswell’s categories of researchers, I am a socially constructed knowledge theory fan with a deep envy of postpositivists because they have cool graphics and formulas (it’s all that Krugman reading that corrupted me), which means that after all I am a pragmatist, which sounds like the category for those who don’t want to commit. Dante would kick me down to hell with the opportunists (who are actually not even in hell proper, and whose punishment is to run after a banner forever, perpetually stung by wasps, with maggots drinking their blood and tears. Ouch!)

  • 7. jess  |  March 8th, 2007 at 12:12 pm

    This blog post is in response to Tuesday’s discussion about whether social science models in general can be predictive. While social science models have the potential to be predictive, in reality they are perceptive models. This is because these models are inherently imperfect: they are studies of people (individuals, organizations, societies), and people are unpredictable.

    On Tuesday we used the stock market in our discussion of how economic models are not predictive models. However, I think other economic activities can explain this viewpoint better, because stock market participants include divergent contributors with varied goals, such as pension funds, small investors, and large hedge funds, and the market often involves decisions by individuals without backgrounds in financial economics.

    Therefore, a more effective example would be to demonstrate that economists (who understand and utilize these perceptive models) are also unable to predict the outcomes of many scenarios when using them. Take the Federal Reserve (the Fed), for example. The 12 big decision-makers of the Fed meet eight times a year to analyze the state of the economy and take the appropriate action to promote growth and keep prices stable. Although these decisions are made using perceptive models that consult the past experiences and actions of the Fed, they often do not produce their intended outcome. This is because they are based on a perfect model in which people act as predicted. Therefore, when the Fed chairman, Ben Bernanke, announces the Fed’s decision, he also has to take individuals’ reactions into account. Often he can make only small changes to monetary policy (rather than the changes the perceptive model calls for), because if he raises interest rates too much people will panic, but if he raises them too little people will also panic. Essentially, humans are unpredictable. If humans acted the same in every circumstance, the Fed’s ideal policy could potentially achieve its desired effect, but in reality it cannot, simply because people are involved.

  • 8. karenhsu  |  March 9th, 2007 at 1:36 pm

    Reading Creswell’s introduction design chapters for qualitative, quantitative, and mixed methods left me with mixed feelings. At first, his listed best practices, which emphasize such introduction elements as narrative hooks, audience identification, problem statements, and explanations of what’s lacking in current research, all seemed clearly intuitive to me. Looking back, however, I guess you can attribute this to hindsight bias. I remember that in writing my first research paper, our introduction was casually written at first, and then went through multiple revisions after the meat of the paper was established. Would it be a good strategy to save the introduction for last? Would this even matter? For that particular paper, we unwittingly followed most of Creswell’s suggestions - we targeted our intended readers, and we carefully cited previous work as well as differentiated it from our own by pointing out its deficiencies with respect to the particular problem we wanted to address. In any case, it’s nice to see a good template for writing a solid research paper introduction laid out in a more formalized way. Perhaps now we’ll all have similarly structured introductions for our final 203 papers.

  • 9. evynn  |  March 9th, 2007 at 5:07 pm

    The thing that struck me about Creswell’s description of qualitative vs. quantitative research methods is that they are not only complementary, but always have been. Though it is only recently that academics have begun to explicitly develop and recognize qualitative methods, the bottom-up, fact-and-impression-gathering approach is the basis for most academic progress. People observe constantly, and constantly build theories. This brings up that exasperating epistemological question of where those observations start, and I’ll thank myself not to address it in detail. My theme is much simpler: that qualitative research - observing - is much more difficult and pervasive than some of Creswell’s examples suggest.

    The process of making good observations is in many ways much more detailed and methodical than it looks. Whatever the subject area, it involves some preconceptions of what you are going to find; indeed, for every question, however open-ended it appears, the asker has some idea of the set of natural, reasonable answers, and usually the answerer has a similar set of answers in mind for that type of question. This is quite basic semantic theory: questions that are too broad (“Why?” or “Where?” with no context), and that don’t suggest a set of answers, are “infelicitous,” and signal some failure in the discourse. In other words, grammatical and semantic mechanisms determine the kinds of answers and observations a given question will elicit.

    Creswell notes that preconceptions are formed by the researcher’s context, and that qualitative researchers try to acknowledge these. But, considering the basic nature of questions as presupposing answers, the context is broader and more subtle than his discussion suggests. The greatest challenge of gathering good observations is to gather enough of the right kind of data to detect patterns that might suggest a theory. The “right kind of data” is never going to be everything about a given domain; to expect otherwise would condemn the researcher to a lifetime of obsessive study and, ultimately, disappointment. As a result, researchers make decisions about scope - about which questions to ask and which data to gather - and these questions presuppose a set of answers. This means that when doing qualitative research it is important to consider not only cultural and social context, but also other ways of formulating the questions we ask and what they would imply about the possible set of answers or data we will gather as a result.

  • 10. cvolz  |  March 10th, 2007 at 10:26 pm

    This was an interesting set of readings for the week, though it struck me as being very pragmatic and thus kind of difficult to reflect on at length.

    The Dooley piece was somewhat more engaging in that he talked about the various problems with induction, and how to differentiate, test, and evaluate various claims. And while I know this is purely a personal preference, I would’ve preferred a more in-depth discussion of the philosophy of science - but that’s just me liking philosophy.

  • 11. nfultz  |  March 11th, 2007 at 10:41 pm

    I got a peek at how the other half lives this week. I’m firmly in the quant camp, and I understand that qual studies can potentially be more in-depth for less time and money. But some of the qual reading made me cringe anyway, specifically these procedures from chapter 10:

    *Purposefully selected subjects
    In other words, introduce a selection bias on purpose? I understand that qual researchers might not have the time or money to run a couple dozen interviews over a random sample, and this might seem like an attractive way to get to the heart of a problem, but this would seriously damage your ability to generalize any of your findings from the individual to the population. How do I know you didn’t pick a few outliers just to prove your point? This feels like cheating.

    *Open-ended interviews
    If you do a few interviews, but you don’t ask the subjects the same questions, how can you meaningfully compare the responses? These can’t be considered valid measurements.

    This isn’t to say that the findings are wrong, but this looks like qual researchers are taking shortcuts. That’s fine, I guess, but the paper won’t persuade me as much as the other one that has t-scores to back it up.
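    To make both complaints concrete, here is a toy simulation (all numbers hypothetical): purposive selection can badly bias an estimate even when a random sample of the same size does fine, and the kind of t-score I find persuasive only means something when the samples are comparable.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        population = rng.normal(loc=50, scale=10, size=100_000)  # true mean = 50

        random_sample = rng.choice(population, size=30, replace=False)
        purposive_sample = np.sort(population)[-30:]  # "purposefully" pick only high scorers

        print(f"true mean:        {population.mean():.1f}")
        print(f"random sample:    {random_sample.mean():.1f}")    # close to 50
        print(f"purposive sample: {purposive_sample.mean():.1f}")  # wildly inflated

        # The t-score check: two independent random samples from the same
        # population should show no significant difference.
        a = rng.choice(population, size=30, replace=False)
        b = rng.choice(population, size=30, replace=False)
        t, pval = stats.ttest_ind(a, b)
        print(f"t = {t:.2f}, p = {pval:.2f}")  # large p-value: no real difference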

  • 12. bindiya  |  March 12th, 2007 at 12:55 am

    This week’s readings were extremely interesting, especially since they left me with a perspective on writing that I never had, coming from a Computer Science background. Creswell’s Chapters 4 and 5 provided reliable templates for all the different types of papers. It was interesting to read about how important it is to identify the research problem, so that the reader doesn’t confuse the research questions with the research problem. It was also very insightful to read about how critical a study’s audience is. Overall, the readings were fun to read and very informative.

  • 13. eunkyoung  |  March 12th, 2007 at 1:47 am

    This topic is very much related to Coye’s other class, “Quantitative Research Methods”. I like this topic because it provides the basic concepts for modeling an ambiguous social state. The relationship between students bringing laptops to class and their class participation or performance is a frequently used example. Some people may assume that there is some relation between those factors, and because we have learned what independent and dependent variables are, we can operationalize and even test the hypothesis of whether there is a positive, negative, or no relation between the variables (a toy sketch of this follows below). Analyzing quantitative results in a qualitative way, or qualitative results in a quantitative way, also helps us get a better understanding of, and insight into, the subject.
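    Here is what operationalizing the laptop example might look like in practice (a minimal Python sketch; all of the data is made up, purely for illustration):

        from scipy import stats

        # Independent variable: hours of in-class laptop use per week.
        laptop_hours  = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
        # Dependent variable: instructor-rated participation score (0-10).
        participation = [9, 8, 8, 7, 7, 6, 5, 5, 4, 3]

        result = stats.linregress(laptop_hours, participation)
        print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}, "
              f"p = {result.pvalue:.4f}")
        # A significant negative slope would be consistent with "more laptop
        # use, less participation" - though correlation alone cannot tell us
        # which way the causation runs, if at all.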

    However, we also talked about the limits of predicting the future. Sociology is not about predicting individual cases but about providing probable foresight - a view of a group as a whole.

  • 14. Ken-ichi  |  March 18th, 2007 at 3:45 pm

    I found chapter 1 of Creswell frustrating. What exactly is the research Creswell is describing? The postpositivist and constructionist approaches seem to be in line with my understanding of research, that is, a principled approach to gaining knowledge. But the advocacy and pragmatic approaches he describes seem fundamentally different. Their end goal seems to be change, not knowledge. They only use knowledge, or the impression of knowledge, as a means toward effecting change. They both seem to support establishing knowledge claims that are fundamentally incorrect if such claims help the researchers effect change. For instance, let us say our research problem is keeping blue people from losing control of the government. Well, if we can do some advocacy research showing that blue people are intrinsically better at making decisions than green people, it won’t matter that volumes of existing positivist findings suggest this is not the case, so long as our findings help us keep blue people in charge. Is this really research? Creswell actually describes pragmatists as believing that “we need to stop asking questions about reality and the laws of nature” (p. 12). If this is accurate, doesn’t it position pragmatic research in opposition to knowledge, and, frankly, make it somewhat solipsistic? I am not trying to be hostile toward these qualitative approaches, but I am genuinely confused about their purpose.

  • 15. Bernt Wahl  |  April 3rd, 2007 at 9:30 am

    Some say statistics lie, but quantitative methods can be a very powerful analytical tool. They can give great insights into facts that may not be intuitive. In the book Freakonomics, a great many facts are brought to life. Tony Greenfield’s Research Methods for Postgraduates lays out the raw functional analysis behind statistics. What makes statistics so important is that the information you can deduce from them can give you great insights into solving problems. That information can often run counter to intuition, which is good, because statistics can set you on the right path to insight.
