ETIPS

Educational Theory into Practice Software



Embody Theory and Research


ETIPS - Make Thinking Visible

Technical Report 5:

Eric Riedel, Ph.D.

Center for Applied Research and Educational Improvement (CAREI)

University of Minnesota

David Gibson, Ph.D.

The Vermont Institutes

Abstract:

A series of statistical tests was carried out to examine the relationship between individual user characteristics (age, gender, student teaching experience, technology skill) and student performance in the ETIP cases. Measures of case outcomes include instructor-assigned essay scores, relevancy of case search, extent of case search, and proportion of search devoted to different categories of case information. Overall, individual student characteristics have a weak and inconsistent relationship to how students work with the cases. By comparison, between-class differences accounted for 9 to 39 percent of the variance in case outcomes.

Original draft released on August 13, 2004. Final draft released on May 4, 2005. Correspondence regarding this paper can be directed to the first author at the Center for Applied Research and Educational Improvement (CAREI), University of Minnesota, 275 Peik Hall, 159 Pillsbury Avenue SE, Minneapolis, MN 55455, riedel@umn.edu.


Executive Summary

A series of statistical tests was carried out to examine the relationship between individual user characteristics (age, gender, student teaching experience, technology skill) and student performance in the ETIP cases. The tests were replicated on three samples of teacher education students who completed case assignments within 14 test-bed courses from fall 2002 through spring 2004. Measures of case outcomes include instructor-assigned essay scores, relevancy of case search, extent of case search, and proportion of search devoted to different categories of case information.

Overall, individual student characteristics have a weak and inconsistent relationship to how students work with the cases. Significant effects for individual characteristics that appear in one sample do not appear, or are even reversed, in another sample. The exception to this pattern is the relationship between gender and relevancy of search: in all three samples, women accessed more information relevant to the ETIP case challenge than men. Other gender differences do not appear.

By comparison, differences in case performance by class grouping appear stronger. Between-class differences accounted for 8.6 to 38.8 percent of the variance in case outcome measures. It is hypothesized that the lack of reliable effects for individual characteristics is due to strong differences in implementation across classes and semesters, including differences in instructional method, specific case assignment, and instructor assessment of student performance. These differences in implementation conditions are also likely responsible for the moderate class effects observed.


Introduction

The Educational Theory into Practice Software (ETIPS) originated with a grant in 2001 from the U.S. Department of Education's Preparing Tomorrow's Teachers to Use Technology (PT3) program. From its inception, these online cases were designed to provide a simulated school setting in which beginning teachers could practice decision-making about classroom and school technology integration, guided by the Educational Technology Integration and Implementation Principles (eTIPs). In each case, users are given a case challenge, based on one of the six principles, asking how they would use educational technology in the specific scenario[1]. They can then search out information about the school staff, students, curriculum, physical setting, technology infrastructure, community, and professional development opportunities. After responding to the case challenge in the form of a short essay, users are given feedback about their essay and case search. (Readers can view cases at http://www.etips.info/.)

The following analysis examines to what degree differences among individual students affect their performance in the ETIP cases. It builds indirectly on the analyses presented in Technical Reports 1-4 by using information about possible outcome measures. It departs from the previous analyses, however, by using information external to the cases to predict case performance rather than relating different measures of case performance (e.g., relevancy and essay scores) to one another.

The underlying premise of this analysis is that, since the ETIP cases are an exercise in instructional technology – both in the instructional method employed (computer simulation) and the content focus (technology integration in the K-12 classroom) – they could be influenced by individual characteristics known to influence other uses of instructional technology. Three of these individual characteristics, gender, age, and technology skill, are known from prior research to have consequences for technology use. The fourth characteristic, student teaching experience, is hypothesized to have an impact on ETIP case use specifically, based on feedback from test-bed members who have worked with the cases in their classrooms. The main questions addressed here are:

  •         How do individual characteristics affect the extent and quality of the information search within ETIP cases? Is there any evidence this impact may vary over multiple cases?
  •         How do individual characteristics affect the quality of student thinking in reaction to the challenges posed in the ETIP cases as reflected in short essays?

Method

Measures

Student characteristic measures were gathered from a pre-semester survey administered prior to student use of the cases. For fall 2002 through spring 2003, instructors administered paper versions of these surveys. For fall 2003 through spring 2004, a shorter version of the paper survey was administered automatically online when students logged in to the ETIP cases website.

Four student characteristics measured on these surveys are used here as predictors. Age was either computed from the birthdate given on the paper survey or taken from a question on the online survey that asked respondents' age directly; it is coded in actual years. Gender was asked in a similar fashion on each survey and coded as a dichotomous variable (1=female, 2=male). On both the paper and online surveys, students were asked to indicate which teaching experiences they had previously had. One item stated, "I have worked as a student teacher for four weeks or more," and was coded as a dichotomous variable (0=did not check, 1=checked). Finally, students were asked to assess their overall level of technological skill with the question "Rate your overall skill with using technology in support of your professional practice," with the options "1 Non-user, 2 Novice, 3 Intermediate, 4 Advanced, or 5 Expert," coded accordingly.
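As a concrete illustration of this coding scheme, the following sketch shows how the four predictors might be derived from a survey extract. It is illustrative only; the column names, variable names, and survey year are hypothetical and do not come from the actual survey files.

    import pandas as pd

    # Hypothetical survey extract; column names and values are illustrative only.
    survey = pd.DataFrame({
        "birth_year":     [1980, 1975, None],    # paper survey
        "reported_age":   [None, None, 22],      # online survey
        "gender":         ["female", "male", "female"],
        "student_taught": [True, False, False],  # "four weeks or more" item checked
        "tech_skill":     [3, 4, 2],             # 1=Non-user ... 5=Expert, used as-is
    })

    survey_year = 2002  # assumed administration year for the paper survey

    # Age in actual years: from birth year where available, otherwise as reported.
    survey["age"] = (survey_year - survey["birth_year"]).fillna(survey["reported_age"])

    # Gender as a dichotomous variable (1=female, 2=male, as coded in the report).
    survey["gender_code"] = survey["gender"].map({"female": 1, "male": 2})

    # Student teaching experience (0=did not check, 1=checked).
    survey["student_teach"] = survey["student_taught"].astype(int)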

The outcome measures are essay quality, relevancy of case search, extent of case search, and focus of case search. Essay quality is measured by the instructor-assigned scores for each essay. In fall 2002, instructors were asked to assign each student essay six scores covering validation of the case question, evidence used in support of the decision, and the decision responding to the case question; instructors also assigned a seventh score as an overall judgment of the essay. An essay scale was constructed by adding together the six scores. From spring 2003 through spring 2004, instructors were asked to assign each student essay three scores covering validation of the case question, evidence used in support of the decision, and the decision responding to the case question. The specific rubrics are given in Appendix A. Each of the three scores is used here as an outcome variable, as is a summary scale combining all three scores.

Relevancy of case search is measured as the total number of relevant items accessed by a student in their case search. For all cases, project staff rated each piece of case information as not relevant, semi-relevant, or relevant, based on how useful the information was for answering the particular challenge associated with a given eTIP. Semi-relevant items are counted the same as not-relevant items in this analysis. (See Appendix B for an example of a case question with semi-relevant and relevant items highlighted.)

Extent of case search is defined as the number of steps taken in a case by a student. It tends to be skewed towards lower values, with most students taking 10 to 40 steps per case to search for information. A step is counted whenever the student accesses a piece of information different from the prior item; multiple returns to the same piece of information are, however, counted as multiple steps. Finally, the proportion of case search devoted to a particular category of information constitutes the fourth outcome variable. This measure is constructed for each of the seven case information categories (school, students, staff, etc.) by dividing the number of steps taken in a category by the number of steps taken overall. It ignores the extent of the student's search and instead focuses on where the student devoted most of his or her search. (See Appendix B for the list of items under each category.)
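The search-based outcome measures can be made concrete with a short sketch. The step log, category map, and relevance set below are hypothetical stand-ins for the data actually logged by the case software; the counting rules follow the definitions above, and treating relevancy as distinct relevant items accessed is an assumption of the sketch.

    from collections import Counter

    # Hypothetical log of one student's case search, in the order items were viewed.
    steps = ["Mission Statement", "Student Demographics", "Standards",
             "Student Demographics", "Classroom-Based Facilities"]

    # Lookup tables assumed to come from the case definition.
    category = {"Mission Statement": "About the School",
                "Student Demographics": "About the School",
                "Standards": "Curriculum & Assessment",
                "Classroom-Based Facilities": "Technology Infrastructure"}
    relevant = {"Student Demographics", "Classroom-Based Facilities"}

    # A step counts only when it differs from the immediately preceding item;
    # returning to an item later still counts as a new step.
    counted = [item for i, item in enumerate(steps)
               if i == 0 or item != steps[i - 1]]
    extent_of_search = len(counted)

    # Relevancy of search: relevant items accessed (distinct items in this sketch).
    relevancy_of_search = len(relevant & set(counted))

    # Proportion of the search devoted to each information category.
    category_counts = Counter(category[item] for item in counted)
    proportion_by_category = {cat: n / extent_of_search
                              for cat, n in category_counts.items()}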

Analytic Strategy

There are three types of statistical analyses repeated for each sample in the results that follow. Because many of the measures employed here have highly skewed distributions, nonparametric statistics are used in the first two analyses. These techniques do not assume a normal distribution of the outcome variable.

The first analyses are Spearman rho correlations between technology skill, age, and all outcome variables. The Spearman rho is a nonparametric measure of association that ranges from -1 to 1. Coefficients close to 0 indicate the absence of a monotonic relationship, while coefficients close to -1 or 1 indicate a strong negative or positive relationship, respectively. Statistically significant relationships are marked by asterisks. The parametric equivalent of the Spearman rho correlation is the Pearson r correlation.
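A minimal sketch of this correlation, using scipy and hypothetical values for technology skill and the essay summary scale:

    from scipy.stats import spearmanr

    # Hypothetical paired observations for one class (values are illustrative only).
    tech_skill = [3, 4, 2, 5, 3, 4, 1, 3]
    essay_sum  = [7, 9, 5, 8, 6, 9, 4, 7]

    rho, p_value = spearmanr(tech_skill, essay_sum)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # flagged with * if p < .05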

The second analyses test for differences in outcomes between groups. The Mann-Whitney test is used to assess whether each outcome variable differs by gender or by student teaching experience. The Kruskal-Wallis test is used to assess whether each outcome variable differs by the class in which the student was enrolled. The parametric equivalents of the Mann-Whitney and Kruskal-Wallis tests are the independent-samples t-test and one-way analysis of variance.
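The two group-difference tests can be run in the same way; the groupings below (gender and class section) and their values are hypothetical.

    from scipy.stats import mannwhitneyu, kruskal

    # Relevancy counts split by gender (illustrative values only).
    female = [12, 15, 9, 14, 11, 13]
    male   = [8, 10, 7, 12, 9]
    u_stat, p_gender = mannwhitneyu(female, male, alternative="two-sided")

    # Relevancy counts split by class section (illustrative values only).
    class_a, class_b, class_c = [12, 9, 14], [7, 8, 10, 6], [11, 13, 12]
    h_stat, p_class = kruskal(class_a, class_b, class_c)

    print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_gender:.3f}")
    print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_class:.3f}")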

Finally, since the essay summary scale and the total number of relevant items accessed follow approximately normal distributions, a parametric test (one-way analysis of variance) is used to see whether these two measures vary according to the class in which the student was enrolled. By dividing between-class variance by total variance, this technique allows a determination of how much of the variance in the two outcome variables can be accounted for by class membership.
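A sketch of this variance decomposition follows, using hypothetical essay summary scores grouped by class. The proportion reported in the tables below corresponds to eta squared: the between-class sum of squares divided by the total sum of squares.

    import numpy as np
    from scipy.stats import f_oneway

    # Hypothetical essay summary scores for three classes (illustrative values only).
    groups = [np.array([7.0, 9, 8, 6]),
              np.array([5.0, 6, 4, 5, 6]),
              np.array([9.0, 8, 10])]

    f_stat, p_value = f_oneway(*groups)

    # Eta squared: between-class variance as a share of total variance.
    scores = np.concatenate(groups)
    grand_mean = scores.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = ((scores - grand_mean) ** 2).sum()
    eta_squared = ss_between / ss_total

Equivalently, eta squared can be recovered from a reported F statistic as F × df_between / (F × df_between + df_within); for example, F(5,58) = 3.256 in Table 5 gives (3.256 × 5) / (3.256 × 5 + 58), or about .219, matching the reported 21.9%.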

Sample

Three samples of students are used in this analysis. Each sample includes "test-bed" teacher education courses where the instructors assigned at least one ETIP case to their students. The first sample includes students who worked on the cases in the fall 2002 semester (see Table 1). This sample is limited to the first case assigned by an instructor who then scored the case essay according to a prescribed rubric. Multiple eTIPs were addressed in these cases.

The second sample includes students who worked on the cases in the spring 2003 semester (see Table 2). The case software changed slightly from the previous semester in that a three-part rather than seven-part rubric was used to score essays, and several minor technical problems were eliminated. Students in this sample come from courses where at least three cases were assigned and scored by instructors, and all the cases addressed eTIP 2.

The third sample includes students who worked on the cases in both the fall 2003 and spring 2004 semesters. The case software changed from the previous spring in utilizing a new interface and in including the assessment features of PlanMap and the automatic essay scorer (for those using eTIP 2). Both features encouraged more focused searches by users. Students in this sample come from courses where at least three cases were assigned and scored by instructors, and all cases addressed eTIP 2.

For each of the three samples, information from a user was included if that user returned a pre-semester survey, completed each of the cases assigned in the correct order, and made use of at least four separate steps in each case. These criteria ensured that the data met human subjects protection requirements, that the user made a reasonable attempt to follow course instructions, and that the user did not encounter insurmountable technical problems. To ensure consistency with previous technical papers, the same fall 2002 and spring 2003 samples used in those papers are used here. The consequence is that these samples are restricted to students who responded to both pre- and post-semester surveys, even though panel data are not required for the present analysis. Because the fall 2003 – spring 2004 sample was not used in previous technical papers and did not have a high panel retention rate, response on the post-semester survey is not a criterion for inclusion in that sample.

Table 1. Sample of Students from Fall 2002

Instructor | Course | eTIP | N | Median Tech Skill (1-5) | % Female | Median Age | % Student Taught
Instructor B | Foundations | 2 | 9 | 3 | 100 | 28 | 89
Instructor G | Foundations | 6 | 12 | 3 | 92 | 20 | 25
Instructor H 1 | Methods | 1 | 16 | 3 | 81 | 21 | 25
Instructor H 2 | Methods | 1 | 9 | 4 | 89 | 21 | 22
Instructor I | Foundations | 2 | 5 | 3 | 80 | 28 | 40
Instructor L | Foundations | 2 | 13 | 3 | 69 | 26 | 31
TOTAL | | | 64 | 3 | 84 | 22 | 36

Table 2. Sample of Students from Spring 2003

Instructor | Course | eTIP | N | Median Tech Skill (1-5) | % Female | Median Age | % Student Taught
Instructor J | Ed Tech | 2 | 11 | 2 | 55 | 21 | 0
Instructor K | Ed Tech | 2 | 11 | 2 | 64 | 19 | 9
Instructor P 1 | Methods | 2 | 5 | 4 | 83 | 21 | 0
Instructor P 2 | Methods | 2 | 14 | 3 | 64 | 21 | 14
TOTAL | | | 41 | 3 | 64 | 21 | 7.1

Table 3. Sample of Students from Fall 2003 – Spring 2004

Instructor | Course | eTIP | N | Median Tech Skill (1-5) | % Female | Median Age | % Student Taught
Instructor J 1 | Ed Tech (Fall) | 2 | 13 | 4 | 77 | 21 | 15
Instructor K 1 | Ed Tech (Fall) | 2 | 15 | 3 | 87 | 21 | 7
Instructor J 2 | Ed Tech (Spring) | 2 | 18 | 3 | 72 | 20 | 0
Instructor K 2 | Ed Tech (Spring) | 2 | 16 | 3 | 75 | 20 | 0
TOTAL | | | 62 | 3 | 77 | 20 | 4.8

Results

The results are presented by sample in Tables 4 through 17 below. The first table for each sample gives Spearman correlations of technology skill and age with case outcomes, along with significance tests for differences in case outcomes by student teaching experience, gender, and class. The second table for each sample shows the proportion of individual variance in essay quality and relevancy of search that can be attributed to between-class differences rather than between-individual differences.

Overall, there is a lack of strong and consistent effects for any of the predictor variables studied in this analysis. The strongest pattern is related to gender. Gender does not appear to be related to the quality of the essay but does appear to be weakly related to characteristics of the case search. In at least one of the cases in all three samples, there was a statistically significant difference between males and females in the total number of relevant items accessed, with females always accessing more relevant items. This does not, however, appear to be simply a function of a more extensive search by females.

Relationships between other individual characteristics and case outcome variables are either weak or nonexistent. There is no evidence from this analysis that reports of student teaching experience are related to either the quality of the case essay or characteristics of the case search. The effects of technology skill are inconsistent. The strongest effects appear in spring 2003 and show that those with higher levels of skill conducted a more extensive and focused search than those with lower levels of skill; these effects are not present in the other two samples, however. The effects of age are also weak and inconsistent. Age appears to positively predict essay quality in the fall 2002 sample but negatively predicts essay quality in the fall 2003 – spring 2004 sample. There are no effects for age in the spring 2003 sample.

By contrast, class groupings appear to have a stronger impact on case outcomes than individual characteristics. With the exception of the second and third cases in the fall 2003 – spring 2004 sample, differences between classes appear in case outcomes. Using analysis of variance to decompose the total variance in the total number of relevant items accessed, between-class variance was found to range from 8.6% to 38.8% of the total variance. Using the same technique for the essay score summary scale, between-class variance accounted for between 15.9% and 37.2% of the total variance.

Table 4. Relationships Between User Characteristics and Case Outcomes in First Case, Fall 2002

Tech Skill and Age columns: measures of association (correlation coefficients). Student Teach, Gender, and Class columns: differences between groups (significance tests).

Outcome | Tech Skill | Age | Student Teach | Gender | Class
Essay Scores:
Score 1 | .05 | .30* | n.s. | n.s. | **
Score 2 | .02 | .13 | n.s. | n.s. | *
Score 3 | .03 | .17 | n.s. | n.s. | n.s.
Score 4 | .07 | .29* | n.s. | n.s. | ***
Score 5 | .08 | -.05 | n.s. | n.s. | *
Score 6 | .07 | .03 | n.s. | n.s. | *
Score 7 | -.07 | .35** | n.s. | n.s. | ***
Sum of Scores | .07 | .17 | n.s. | n.s. | *
Extent of Search:
Total # of Relevant Items Accessed | .13 | -.03 | n.s. | * (f) | *
Total # of Steps Taken | .18 | -.29* | n.s. | n.s. | ***
Proportion of Search in Category:
About the School | .09 | -.13 | n.s. | n.s. | n.s.
Staff | .03 | .01 | n.s. | n.s. | n.s.
Students | .18 | -.24* | n.s. | n.s. | ***
Curriculum & Assessment | .25* | .17 | n.s. | n.s. | *
Technology Infrastructure | -.08 | .10 | n.s. | n.s. | n.s.
School & Community | .14 | -.13 | n.s. | * (f) | n.s.
Professional Development | .17 | -.14 | n.s. | n.s. | *

* p < .05, ** p < .01, *** p < .001, n.s. = not significant

Table 5. Proportion of Variance in Essay and Relevancy Scores Attributable to Between Class Differences in the First Case, Fall 2002

Measure | F Test | % of Variance Between Classes
Sum of Essay Scores | F(5,58)=3.256 | 21.9%
Total # of Relevant Items Accessed | F(5,64)=2.139 | 14.3%

Table 6. Relationships Between Individual Characteristics and Case Outcomes in First Case, Spring 2003

Tech Skill and Age columns: measures of association (correlation coefficients). Student Teach, Gender, and Class columns: differences between groups (significance tests).

Outcome | Tech Skill | Age | Student Teach | Gender | Class
Essay Scores:
Score 1 | .03 | -.04 | n.s. | n.s. | n.s.
Score 2 | .13 | -.05 | n.s. | n.s. | n.s.
Score 3 | -.05 | -.11 | n.s. | n.s. | *
Sum of Scores | .02 | -.15 | n.s. | n.s. | *
Extent of Search:
Total # of Relevant Items Accessed | .31* | -.09 | n.s. | * (f) | *
Total # of Steps Taken | .40** | -.09 | n.s. | n.s. | ***
Proportion of Search in Category:
About the School | -.21 | -.13 | n.s. | n.s. | n.s.
Staff | -.20 | -.01 | n.s. | n.s. | n.s.
Students | .11 | -.04 | n.s. | n.s. | n.s.
Curriculum & Assessment | -.07 | -.27 | n.s. | n.s. | n.s.
Technology Infrastructure | -.30 | .05 | n.s. | n.s. | *
School & Community | .12 | .31 | n.s. | n.s. | n.s.
Professional Development | .14 | .23 | * (no) | n.s. | n.s.

* p < .05, ** p < .01, *** p < .001, n.s. = not significant

Table 7. Proportion of Variance in Essay and Relevancy Scores Attributable to Between Class Differences in the First Case, Spring 2003

Measure | F Test | % of Variance Between Classes
Sum of Essay Scores | F(3,38)=2.780 | 18.0%
Total # of Relevant Items Accessed | F(3,38)=4.484 | 26.2%

Table 8. Relationships Between Individual Characteristics and Case Outcomes in Second Case, Spring 2003

Tech Skill and Age columns: measures of association (correlation coefficients). Student Teach, Gender, and Class columns: differences between groups (significance tests).

Outcome | Tech Skill | Age | Student Teach | Gender | Class
Essay Scores:
Score 1 | .03 | -.16 | n.s. | n.s. | n.s.
Score 2 | -.00 | -.30 | n.s. | n.s. | n.s.
Score 3 | .04 | -.19 | n.s. | n.s. | **
Sum of Scores | .01 | -.30 | n.s. | n.s. | *
Extent of Search:
Total # of Relevant Items Accessed | .12 | -.19 | n.s. | * (f) | n.s.
Total # of Steps Taken | .40** | -.09 | n.s. | n.s. | n.s.
Proportion of Search in Category:
About the School | .07 | -.03 | n.s. | n.s. | n.s.
Staff | .09 | .30 | n.s. | n.s. | *
Students | .08 | -.27 | n.s. | n.s. | n.s.
Curriculum & Assessment | .02 | -.13 | n.s. | n.s. | *
Technology Infrastructure | .13 | -.11 | n.s. | n.s. | n.s.
School & Community | .11 | -.10 | n.s. | n.s. | n.s.
Professional Development | -.17 | -.09 | n.s. | n.s. | n.s.

* p < .05, ** p < .01, *** p < .001, n.s. = not significant

Table 9. Proportion of Variance in Essay and Relevancy Scores Attributable to Between Class Differences in the Second Case, Spring 2003

Measure | F Test | % of Variance Between Classes
Sum of Essay Scores | F(3,37)=2.983 | 19.5%
Total # of Relevant Items Accessed | F(3,38)=1.194 | 8.6%

Table 10. Relationships Between Individual Characteristics and Case Outcomes in Third Case, Spring 2003

Tech Skill and Age columns: measures of association (correlation coefficients). Student Teach, Gender, and Class columns: differences between groups (significance tests).

Outcome | Tech Skill | Age | Student Teach | Gender | Class
Essay Scores:
Score 1 | .31* | .01 | n.s. | n.s. | n.s.
Score 2 | .27 | .15 | n.s. | n.s. | n.s.
Score 3 | .39* | -.02 | n.s. | n.s. | n.s.
Sum of Scores | .38* | .02 | n.s. | n.s. | n.s.
Extent of Search:
Total # of Relevant Items Accessed | .04 | -.21 | n.s. | n.s. | **
Total # of Steps Taken | .40* | -.09 | n.s. | n.s. | n.s.
Proportion of Search in Category:
About the School | -.15 | .25 | n.s. | n.s. | n.s.
Staff | .28 | .23 | n.s. | n.s. | ***
Students | -.28 | .14 | n.s. | * (f) | n.s.
Curriculum & Assessment | .35* | -.24 | n.s. | * (f) | *
Technology Infrastructure | -.48** | -.11 | n.s. | n.s. | ***
School & Community | -.41** | .06 | n.s. | n.s. | *
Professional Development | -.18 | .05 | n.s. | n.s. | *

* p < .05, ** p < .01, *** p < .001, n.s. = not significant

Table 11. Proportion of Variance in Essay and Relevancy Scores Attributable to Between Class Differences in the Third Case, Spring 2003

Measure | F Test | % of Variance Between Classes
Sum of Essay Scores | F(3,37)=2.332 | 15.9%
Total # of Relevant Items Accessed | F(3,38)=8.020 | 38.8%

Table 12. Relationships Between Individual Characteristics and Case Outcomes in First Case, Fall 2003 – Spring 2004

Tech Skill and Age columns: measures of association (correlation coefficients). Student Teach, Gender, and Class columns: differences between groups (significance tests).

Outcome | Tech Skill | Age | Student Teach | Gender | Class
Essay Scores:
Score 1 | -.08 | -.23 | n.s. | n.s. | n.s.
Score 2 | -.05 | -.25 | n.s. | n.s. | ***
Score 3 | .04 | -.34** | n.s. | n.s. | ***
Sum of Scores | -.03 | -.33* | n.s. | n.s. | ***
Extent of Search:
Total # of Relevant Items Accessed | -.04 | .10 | n.s. | n.s. | *
Total # of Steps Taken | .17 | .24 | n.s. | n.s. | *
Proportion of Search in Category:
About the School | -.10 | -.19 | n.s. | n.s. | n.s.
Staff | .04 | -.21 | * (yes) | n.s. | n.s.
Students | .07 | .05 | n.s. | n.s. | n.s.
Curriculum & Assessment | -.17 | .14 | n.s. | n.s. | n.s.
Technology Infrastructure | -.09 | -.27* | n.s. | n.s. | n.s.
School & Community | -.01 | .14 | n.s. | n.s. | n.s.
Professional Development | -.18 | -.26* | n.s. | n.s. | n.s.

* p < .05, ** p < .01, *** p < .001, n.s. = not significant

Table 13. Proportion of Variance in Essay and Relevancy Scores Attributable to Between Class Differences in the First Case, Fall 2003 – Spring 2004

Measure | F Test | % of Variance Between Classes
Sum of Essay Scores | F(3,56)=10.571 | 36.2%
Total # of Relevant Items Accessed | F(3,58)=3.585 | 15.6%

Table 14. Relationships Between Individual Characteristics and Case Outcomes in Second Case, Fall 2003 – Spring 2004

Tech Skill and Age columns: measures of association (correlation coefficients). Student Teach, Gender, and Class columns: differences between groups (significance tests).

Outcome | Tech Skill | Age | Student Teach | Gender | Class
Essay Scores:
Score 1 | .06 | -.21 | n.s. | n.s. | n.s.
Score 2 | -.08 | -.20 | n.s. | n.s. | n.s.
Score 3 | .37** | -.10 | n.s. | n.s. | n.s.
Sum of Scores | .15 | -.20 | n.s. | n.s. | n.s.
Extent of Search:
Total # of Relevant Items Accessed | -.07 | -.13 | n.s. | * (f) | n.s.
Total # of Steps Taken | .17 | .24 | n.s. | n.s. | n.s.
Proportion of Search in Category:
About the School | -.05 | -.20 | n.s. | n.s. | n.s.
Staff | -.17 | -.14 | n.s. | n.s. | n.s.
Students | -.09 | .12 | n.s. | n.s. | n.s.
Curriculum & Assessment | -.09 | -.11 | n.s. | n.s. | n.s.
Technology Infrastructure | .18 | -.03 | n.s. | n.s. | n.s.
School & Community | -.02 | .17 | n.s. | n.s. | n.s.
Professional Development | -.07 | -.12 | n.s. | n.s. | n.s.

* p < .05, ** p < .01, *** p < .001, n.s. = not significant

Table 15. Proportion of Variance in Essay and Relevancy Scores Attributable to Between Class Differences in the Second Case, Fall 2003 – Spring 2004

Measure | F Test | % of Variance Between Classes
Sum of Essay Scores | F(3,54)=10.976 | 37.9%
Total # of Relevant Items Accessed | F(3,58)=3.611 | 15.7%

Table 16. Relationships Between Individual Characteristics and Case Outcomes in Third Case, Fall 2003 – Spring 2004

Tech Skill and Age columns: measures of association (correlation coefficients). Student Teach, Gender, and Class columns: differences between groups (significance tests).

Outcome | Tech Skill | Age | Student Teach | Gender | Class
Essay Scores:
Score 1 | .04 | -.10 | n.s. | * (f) | n.s.
Score 2 | .09 | -.14 | n.s. | * (f) | n.s.
Score 3 | .05 | -.23 | n.s. | n.s. | n.s.
Sum of Scores | .05 | -.18 | n.s. | * (f) | n.s.
Extent of Search:
Total # of Relevant Items Accessed | .11 | .04 | n.s. | n.s. | n.s.
Total # of Steps Taken | .17 | .24 | n.s. | n.s. | n.s.
Proportion of Search in Category:
About the School | -.01 | -.11 | n.s. | n.s. | n.s.
Staff | -.09 | .07 | n.s. | n.s. | n.s.
Students | -.06 | .14 | n.s. | n.s. | n.s.
Curriculum & Assessment | .07 | .05 | n.s. | n.s. | n.s.
Technology Infrastructure | .01 | -.24 | n.s. | n.s. | n.s.
School & Community | .11 | -.03 | n.s. | n.s. | n.s.
Professional Development | -.22 | -.23 | n.s. | n.s. | n.s.

* p < .05, ** p < .01, *** p < .001, n.s. = not significant

Table 17. Proportion of Variance in Essay and Relevancy Scores Attributable to Between Class Differences in the Third Case, Fall 2003 – Spring 2004

Measure | F Test | % of Variance Between Classes
Sum of Essay Scores | F(3,54)=6.370 | 26.1%
Total # of Relevant Items Accessed | F(3,58)=2.350 | 10.8%

Discussion

The lack of effects for individual student characteristics is surprising given expectations from prior research and feedback from test-bed faculty. In particular, reports from faculty that student experience in the classroom shaped their experience with the cases were not confirmed in these analyses. Only one consistent effect was observed: female students appear to access more relevant information than male students. The reason for this difference could be either that women are more likely to seek out a particular category of information, such as "students," that happens to be relevant to the eTIP, or that women devote more attention to the quality of their search in general than men. Given the absence of consistent gender differences in the number of steps taken or in the particular case information categories accessed, the results suggest a more general difference in how women and men approach the cases.

The particular class a student belonged to was much more predictive of case performance than characteristics of individual students – sometimes accounting for a third or more of the variance in a case outcome measure. This is likely due to the considerable variation in how instructors implemented the cases in their courses, including differences in how the cases were framed in the course, expectations about student use of the cases, and how the case essays were scored. This is in addition to the changes in software that occurred over the two years of test-bed implementation.

A stronger test for the impact of individual differences could occur under a different set of conditions surrounding data collection. These include strict controls on the methods of case implementation and instructor assessment of essays. Alternatively, more elaborate statistical tests could be employed to detect effects for individual characteristics if the number of classes and students sampled were far larger than those involved here. Even then, given the present results, future analyses would be unlikely to turn up powerful effects for student characteristics on initial use of the cases.


Appendix A: Essay Score Rubrics

Table A.1. Summary of Rubric Score Criteria (Fall 2002)

Score | Criterion
1 | Validation: Explains central challenge.
2 | Evidence: Identifies factors in the case related to the challenge.
3 | Evidence: Analyzes range of options for addressing challenge noting their advantages and disadvantages.
4 | Evidence: States a decision or recommendation for implementing an option or change in response to the challenge.
5 | Decision: Explains a justifiable rationale for the decision or recommendation.
6 | Decision: Describes anticipated results of implementing the decision or recommendation.
7 | Essay meets or does not meet expectations for all six decision making criteria.

Table A.2. Summary of Rubric Score Criteria (Spring 2003)

Score | Criterion
1 | Validation: Explains central challenge.
2 | Evidence: Identifies case information that must be considered in meeting the challenge.
3 | Decision: States a justified recommendation for implementing a response to the challenge.

Appendix B: Example of Case with Relevant Items Highlighted

The following example illustrates how relevancy is applied in one of the ETIP cases. It is taken from a case set in an urban middle school called Cold Springs, in which the instructor assigned questions pertaining to eTIP 2 ("added value"). The case challenge reads as follows:

This case will help you practice your instructional decision making about technology integration. As you complete this case, keep in mind eTIP 2: technology provides added value to teaching and learning. Imagine that you are midway through your first year as a seventh grade teacher at Cold Springs Middle School, in an urban location. A responsibility of all teachers is to differentiate their lessons and instruction in order to accommodate for the varying learning styles, abilities, and needs of students in their classrooms and to foster students' critical and creative thinking skills. As a new teacher at Cold Springs Middle School, you will be observed periodically throughout the first few years of your career. One of the focuses of these observations is to analyze how well your instructional approaches are accommodating students' needs. The principal, Dr. Kranz, was pleased with your first observation. For your next observation she challenged you to consider how technology can add value to your ability to meet the diverse needs of your learners, in the context of both your curriculum and the school's overall improvement efforts. She will look for your technology integration efforts during your next observation.

On the case's answer page, you will be asked to address this challenge by making three responses:

1. Confirm the challenge: What is the central technology integration challenge in regard to student characteristics and needs present within your classroom?

2. Identify evidence to consider: What case information must be considered in making a decision about using technology to meet your learners' diverse needs?

3. State your justified recommendation: What recommendation can you make for implementing a viable classroom option to address this challenge?

Examine the school web pages to find the information you need about both the context of the school and your classroom in order to address the challenge presented above. When you are ready to respond to the challenge, click "submit answer".

After reading the challenge, the user would then search for information relevant to the questions posed. The table below lists all the information categories and the individual items in those categories available for searching in all cases. The information items relevant to this particular case (eTIP 2) are highlighted: relevant information is in bold, and semi-relevant information is in bold italics. Note that this table serves as a key for the examination of individuals in two selected classes presented later in the paper.

Table B.1. Sample Problem Space with Relevant Information

CATEGORY | INDIVIDUAL INFORMATION ITEMS
Prologue (1) | Prologue=1
About the School (2-11) | Mission Statement=2; School Improvement Plan=3; Facilities=4; School Map=5; Student Demographics=6; Student Demographics Clipping=7; Performance=8; Schedule=9; Student Leadership=10; Student Leadership Artifact=11
Staff (12-22) | Staff Demographics=12; Staff Demographics Talk=13; Mentoring=14; Staff Leadership=15; Staff Leadership Talk=16; Faculty Schedule=17; Faculty Meetings=18; Faculty Talk=19; Faculty Meetings Artifact=20; Faculty Contract=21; Faculty Contract Talk=22
Curriculum and Assessment (23-30) | Standards=23; Instructional Sequence=24; Computer Curriculum=25; Classroom Pedagogy and Assessment=26; Teachers=27; Talk=28; Talk 2=29; Clipping=30
Technology Infrastructure (31-42) | School Wide Facilities=31; Library / Media Center=32; Classroom-Based Facilities=33; Classroom-Based Software Setup=34; Community Facilities=35; Technology Support Staff=36; Policies and Rules=37; Policies Clipping=38; Technology Committee=39; Technology Committee Talk=40; Technology Survey Results=41; Technology Plan and Budget=42
School Community Connections (43-48) | Family Involvement=43; Family Involvement Clipping=44; Business Involvement=45; Business Involvement Clipping=46; Higher Education Involvement=47; Community Resources=48
Professional Development (49-68) | Professional Development Content=49; Professional Development Content Area=50; Resources=51; Professional Development Leadership=52; Professional Leadership Talk=53; Learning Community=54; Learning Community Talk=55; Professional Development Process Goals=56; Professional Development Data=57; Professional Development Data Artifact=58; Professional Development Evaluation=59; Professional Development Evaluation Talk=60; Professional Development Research=61; Professional Development Research Artifact=62; Professional Development Design=63; Professional Development Design Talk=64; Professional Development Learning=65; Professional Development Learning Artifact=66; Professional Development Collaboration=67; Professional Development Collaboration Artifact=68
Epilogue (69) | Epilogue=69
Essay (70) | Essay=70

Bold items have high relevance. Bold, italicized items have medium relevance.


[1] These six principles state the conditions under which technology use in schools has been demonstrated to be most effective. Case 1: Learning outcomes drive the selection of technology. Case 2: Technology provides added value to teaching and learning. Case 3: Technology assists in the assessment of learning outcomes. Case 4: Ready access to supported, managed technology is provided. Case 5: Professional development targets successful technology integration. Case 6: Professional community enhances technology integration and implementation. See Dexter, S. (2002). eTIPS-Educational technology integration and implementation principles. In P. Rodgers (Ed.), Designing instruction for technology-enhanced learning (pp.56-70). New York: Idea Group Publishing.