Hi guys,
I am a PhD student in Computer Science, studying human perception of
consensus in threaded discussions (like in Usenet and Web forums).
In order to run this experiment with real people online (see below for more
details), I have developed a Java/Swing application coupled with an Oracle
relational database.
I am currently looking for people to take part. If you are interested, feel
free to launch the experiment via Java Web Start from
http://www.irit.fr/~Guillaume.Cabanac/expe .
Do not hesitate to send me your feedback and comments.
Thank you in advance.
-------------------------------------------------------------------------
Guillaume Cabanac http://www.irit.fr/~Guillaume.Cabanac
PhD student in Computer Science
IRIT - Computer Science Research Institute of Toulouse University, France
Information Systems team
*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*==
** What is The Task of a Participant in This Experiment? **
In the proposed experiment, a participant has to evaluate 13 argumentative
discussion threads (a discussion thread is a tree whose nodes contain
people's statements; the nodes are organized chronologically, as in Usenet or
Web forums). You can see a screen capture of a discussion thread evaluation
here: http://www.irit.fr/~Guillaume.Cabanac/expe/example.png.
Evaluating a discussion thread involves two steps:
1. The participant labels each node by identifying its *opinion*: does the
node confirm (pro), refute (against), or remain neutral toward its direct
parent? On the screen capture, the participant has labeled the nodes using
the "flag" buttons (labeled nodes are then displayed in color). For example,
Tom refutes Bob, who in turn refutes Alice.
2. For each node that has replies, the participant synthesizes the opinions
expressed in its replies. This *mental synthesis* value ranges from
refutation to confirmation. In the example, the participant feels that Bob's
statement is rather confirmed, while Alice's is almost refuted. (A minimal
Java sketch of this data model follows the two steps.)
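For readers who think in code, here is a minimal Java sketch of the data
model described above. The experiment itself is a Java/Swing application,
but these class and field names are my own illustration, not the actual
classes of the application:

    import java.util.ArrayList;
    import java.util.List;

    /** Opinion of a statement toward its direct parent (step 1). */
    enum Opinion { PRO, AGAINST, NEUTRAL }

    /** One node of a discussion thread: a statement and its replies. */
    class ThreadNode {
        final String author;
        final String statement;
        final Opinion opinionTowardParent; // null for the root, which has no parent
        final List<ThreadNode> replies = new ArrayList<>(); // chronological order

        ThreadNode(String author, String statement, Opinion opinionTowardParent) {
            this.author = author;
            this.statement = statement;
            this.opinionTowardParent = opinionTowardParent;
        }

        ThreadNode reply(String author, String statement, Opinion opinion) {
            ThreadNode child = new ThreadNode(author, statement, opinion);
            replies.add(child);
            return child;
        }
    }

    class Demo {
        public static void main(String[] args) {
            // The small example above: Tom refutes Bob, who refutes Alice.
            ThreadNode alice = new ThreadNode("Alice", "initial statement", null);
            ThreadNode bob = alice.reply("Bob", "a counter-argument", Opinion.AGAINST);
            bob.reply("Tom", "a counter-counter-argument", Opinion.AGAINST);
        }
    }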
** What is The Aim of This Experiment? **
The main aim of this experiment is to compare (i) the mental synthesis of
opinions expressed within discussion threads with (ii) the results of the
two algorithms that we have developed.
These algorithms take an opinion-labeled discussion thread as input and
compute a value that synthesizes the opinions, ranging from "the root
statement is refuted" to "the root statement is confirmed".
The first algorithm computes a statistical score, whereas the second is
based on an AI research formalism, the bipolar argumentation framework. If
you are interested in these algorithms, I can send you a research paper
(RIAO'2007) upon request by mail.
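To give a concrete flavour, a simple statistical synthesis could be
sketched in Java as follows, reusing the ThreadNode and Opinion types
sketched above. To be clear, this is only my illustrative scheme, built on
the assumption that a refuted reply should lose its weight; it is not the
actual algorithm from the RIAO'2007 paper:

    class Synthesis {
        /** Numeric value of an opinion label. */
        static double value(Opinion o) {
            switch (o) {
                case PRO:     return  1.0;
                case AGAINST: return -1.0;
                default:      return  0.0; // NEUTRAL
            }
        }

        /** Score in [-1, 1] for the statement held by a node:
         *  -1 = refuted, 0 = balanced, +1 = confirmed.
         *  An unchallenged (leaf) statement fully stands; a reply carries
         *  its opinion only to the extent that it stands itself, so a
         *  refuted refutation simply loses its weight. */
        static double synthesize(ThreadNode node) {
            if (node.replies.isEmpty()) return 1.0;
            double sum = 0.0;
            for (ThreadNode reply : node.replies) {
                double standing = Math.max(0.0, synthesize(reply));
                sum += value(reply.opinionTowardParent) * standing;
            }
            return sum / node.replies.size();
        }
    }

Many variants are possible (e.g. letting a refuted refutation actively
confirm its grandparent); comparing such design choices with human
judgments is precisely what the experiment is about.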
A secondary aim is to check whether people label the discussion threads in
the same way (step 1).
** Possible Applications of This Research **
The main topic of my Computer Science thesis is "digital annotation" on the
Web, where vast amounts of digital documents are freely accessible. With a
classical Web browser, people are rather passive, as they can only read
documents: one cannot point out a mistake, ask a question, link the document
to another one, or simply express a thought.
In order to enable people to interact with digital documents, "annotation
systems" have been developed since the early 1990s, cf. (Wolfe, 2002,
http://dx.doi.org/10.1016/S8755-4615(02)00144-5). Such software makes it
possible to annotate any digital document the same way as paper, for
personal purposes, e.g. critical reading, proofreading, learning, etc.
Moreover, as modern computers are networked, digital annotations can be
stored in a common database. This makes it possible to display a document
along with its annotations, which may come from numerous readers all over
the world. Readers can then reply to annotations, and to replies, forming
"discussion threads" that are displayed in the context of the commented
documents.
When documents are massively annotated (see a video demonstration of the
Amaya annotation system at
http://g.cabanac.free.fr/publications/2005-11-IWAC/demoAmaya.wmv) and
discussed (each annotation can spark off a discussion thread), it seems to
me that the reader is overwhelmed: reading an annotation and its
hierarchically organized replies, then synthesizing their opinions, is a
difficult task.
In order to overcome this problem, annotation systems could compute the
"social validation" of each annotation. This requires that annotators give
an explicit opinion type to their annotations; NLP algorithms can also be
applied, e.g. (Pang et al., 2002,
http://portal.acm.org/citation.cfm?id=1118704&dl=GUIDE). The reader can
then decide to focus on discussions that have reached consensus (totally
refuted or confirmed), or instead on ongoing discussions, identified by a
neutral social validation. Moreover, social validations computed within a
discussion thread may guide the reader, who can identify "supporting" and
"defeating" branches.
I hope that these explanations help you understand the aims of my
experiment. Please let me know what you think about it.
Guillaume Cabanac.