RADAR generates three summary files per student and a global file that summarizes the data for all participants in an experiment. The latter file can be exported to a spreadsheet or a statistical program. Generating these files is not automatic; the researcher must complete three steps. First, the researcher selects the file containing the definition of a Read&Answer experiment (.rnw extension). Second, the researcher defines and enters a number of variables for the analysis. Some variables are obligatory. For instance, the number of words in each region must be entered so that RADAR can calculate the per-word reading time for that region, and the correct answer to each multiple-choice question must be entered so that RADAR can score the number of correct answers. Variables relating text segments to questions are also obligatory; for instance, the researcher must specify which text information is relevant to answering each question. Other variables are optional. For example, RADAR allows the researcher to define and categorize either text regions or questions. Third, once the definition of the experiment is complete, the researcher specifies the set of sequences to analyze, and RADAR generates output files that can be exported to spreadsheets and/or statistical programs.
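To make the variable-definition step concrete, the following is a minimal sketch in Python of how such an experiment definition might be represented. All names and structures here (words_per_region, correct_answers, relevant_regions, region_categories) are illustrative assumptions for exposition; they do not reflect RADAR's actual .rnw file format or internal API.

```python
# Hypothetical sketch of the analysis variables a researcher supplies to RADAR.
# Field names and structures are illustrative assumptions, not RADAR's format.

from dataclasses import dataclass, field


@dataclass
class ExperimentDefinition:
    # Obligatory: word count per text region (region id -> number of words),
    # needed to compute per-word reading times.
    words_per_region: dict[str, int]
    # Obligatory: keyed option for each multiple-choice question.
    correct_answers: dict[str, str]
    # Obligatory: which text regions are relevant to answering each question.
    relevant_regions: dict[str, list[str]]
    # Optional: researcher-defined categories for regions or questions.
    region_categories: dict[str, str] = field(default_factory=dict)

    def per_word_time(self, region: str, reading_time_ms: float) -> float:
        """Per-word reading time for a region, in milliseconds."""
        return reading_time_ms / self.words_per_region[region]

    def score(self, question: str, answer: str) -> int:
        """1 if the student's answer matches the keyed option, else 0."""
        return int(self.correct_answers[question] == answer)


definition = ExperimentDefinition(
    words_per_region={"R1": 42, "R2": 35},
    correct_answers={"Q1": "b"},
    relevant_regions={"Q1": ["R2"]},
    region_categories={"R1": "introduction", "R2": "causal-explanation"},
)

print(definition.per_word_time("R1", reading_time_ms=12600.0))  # 300.0 ms/word
print(definition.score("Q1", "b"))                              # 1
```

With a definition like this in hand, the third step would amount to iterating over each student's recorded sequence of region visits and question answers, applying these computations, and writing the resulting rows to the exportable summary files.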
As indicated earlier, Read&Answer is mainly used for research on task-oriented reading. Within this approach, the effect of different types of tasks on comprehension processes and on learning from texts is an important research topic. For example, Cerdán and Vidal-Abarca (2008) used a previous version of Read&Answer to compare the effects of two tasks, writing an essay and answering shorter intra-text questions, on integrating information across different texts. They found that writing an essay was more effective at the deep level of comprehension than answering intra-text questions, whereas no difference was apparent at the superficial level of understanding. Read&Answer provided on-line evidence supporting this conclusion: students who wrote the essay showed more integrative behavior, consisting of reading a relevant piece of information and then reading another non-consecutive but closely related relevant piece, than those who answered intra-text questions. In a follow-up study, Cerdán et al. (2009) found that high-level questions facilitated deep comprehension, but not immediate performance or delayed recall of text, and that high- and low-level questions differentially affected text-inspection patterns. On-line evidence supporting these conclusions was also found: for example, high-level questions led students to relate separate pieces of text information more often than low-level questions did.
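The integrative-behavior measure described above can be made concrete as a count of transitions between relevant, non-adjacent regions in a student's inspection sequence. The sketch below is an illustrative reconstruction of that idea under assumed region numbering and relevance coding; it is not Read&Answer's or RADAR's actual computation.

```python
# Illustrative sketch: counting "integrative" transitions in an inspection
# sequence, i.e., moves from one question-relevant region to another,
# non-consecutive relevant region. Region numbers and the relevance set
# below are hypothetical.

def count_integrative_transitions(sequence: list[int], relevant: set[int]) -> int:
    """Count transitions where both visited regions are relevant and are
    not adjacent in the text (|a - b| > 1)."""
    count = 0
    for a, b in zip(sequence, sequence[1:]):
        if a in relevant and b in relevant and abs(a - b) > 1:
            count += 1
    return count


# A student's hypothetical sequence of visited region numbers.
sequence = [1, 2, 5, 2, 9, 5, 6]
relevant = {2, 5, 9}  # regions coded as relevant to the question

print(count_integrative_transitions(sequence, relevant))  # 4
```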