[Joe Montgomery]

  1. Comprehensiveness
The placement assessment system consists of a toolbox of assessments and sources of evidence that serve a variety of purposes and provide users with the information they need to make decisions.
Multiple instruments are used in the assessment and placement process.


Final decisions about placement should take into consideration other student characteristics and factors that influence success (e.g., goals, attributes, special needs, etc.). [AMATYC]

If, for financial or even programmatic reasons, the initial method of placement is somewhat reductive, instructors of record should create an opportunity early in the term to review and change students’ placement assignments, and uniform procedures should be established to facilitate the easy re-placement of improperly placed students.


Background

The Standards for Educational and Psychological Testing (American Educational Research Association, 1999) require that multiple measures be used for educational placement decisions. Standard 13.7 states: “In educational settings, a decision or characterization that will have a major impact on a student should not be made on the basis of a single test score. Other relevant information should be taken into account if it will enhance the overall validity of the decision” (p. 146). This standard requires that a comprehensive assessment involve multiple measures, with data from multiple sources, such as school records or classroom observations as well as test scores. The use of multiple instruments is likely to significantly increase placement accuracy. Hughes and Scott-Clayton (2011), in their review of the community college placement literature for the Community College Research Center (CCRC), argued that the limited validity evidence for the COMPASS and ACCUPLACER tests is partly due to their use in isolation. In fact, in spite of the strong case for multiple measures, community colleges throughout the country mainly use a single measure for placement (ibid.). Not surprisingly, researchers in this area uniformly recommend a shift to multiple measures. Even the publishers of the two leading placement tests, ACCUPLACER and COMPASS, recommend the use of multiple measures rather than reliance on their own tests alone.



Multiple Measures

The multiple measures could include other tests, the use of high school transcript information, self-placement recommendations, or one or more non-cognitive measures. These options are briefly summarized below:



High school transcripts: A number of studies have shown that high school transcript data are good predictors of college performance. Hughes and Scott-Clayton (2011) documented several studies in which high school performance was a significant predictor of college performance, in some cases exceeding the validity of test results. Studies in Washington State by Montgomery (2007) and by Stern and Pavelcheck (2007) showed significant correlations between high school math GPA and both college math grades and overall college GPA. Both of these studies showed that students needed to complete a post-Algebra II level of high school math and earn a cumulative math GPA of 3.0 to be successfully placed in college-level math (a rule sketched below).
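A transcript-based rule like this is straightforward to operationalize. The following sketch is illustrative only: the post-Algebra II course list, field names, and function name are hypothetical, and only the two cut values (post-Algebra II completion and a 3.0 cumulative math GPA) come from the studies cited above.

    # Illustrative transcript-based rule; course names are hypothetical.
    POST_ALGEBRA_II = {"precalculus", "trigonometry", "statistics", "calculus"}

    def places_into_college_math(courses_taken, math_gpa):
        """Completion of a post-Algebra II course plus a cumulative
        high school math GPA of at least 3.0."""
        beyond_alg2 = any(course in POST_ALGEBRA_II for course in courses_taken)
        return beyond_alg2 and math_gpa >= 3.0

    places_into_college_math({"algebra2", "precalculus"}, 3.2)  # -> True

In practice, such a rule would serve as one input among several, not as a standalone placement decision.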



Because high school transcript information is difficult for community colleges to obtain, process, and store, several researchers have examined the use of self-reports of high school performance. Marwick (2002) found that such self-report data significantly predicted college performance and noted other studies in which self-reports of high school performance exceeded test results in predicting college success. The Hughes and Scott-Clayton (2011) review also cited several research studies in which self-reports of high school performance were successfully used as predictors. Marwick (2004) believed that the correlation of high school and college grades was due, in part, to the fact that both reflect student motivation and persistence.

Student input or self-placement: Self-placement was used fairly extensively during the 1970s, partly as a reaction against testing and at a time when placement was viewed unfavorably by many. This option gave students more responsibility for placement and acknowledged their “right to fail”. High rates of failure and dropping out led to the demise of this approach. However, student input into the placement decision, as one data point among many, has potential advantages. Self-choice could help avoid placing students too low, increase student responsibility and accountability, and improve the acceptance of placement decisions. Soliciting students’ perceptions of the most appropriate level of placement could establish a starting point for discussions with advisors or faculty members regarding an appropriate placement decision. Marwick (2002) used self-choice as one option in her research design, but did not fully evaluate its effectiveness.


Diagnostic measures: A number of computer- or web-based diagnostic tools are becoming available, including diagnostic options for the ACCUPLACER and COMPASS tests. Diagnostic measures differ from conventional placement tests in that they provide detailed information on student strengths and weaknesses in specific content areas. Diagnostic information is designed to be highly actionable for identifying appropriate areas for remediation and for tracking improvements. Placement test information, on the other hand, provides enough information for a placement decision but not for a deeper understanding of student skill and knowledge levels. Consequently, diagnostic information can provide a valuable complement to placement information. However, diagnostic testing may take more time to complete, plus additional time for interpretation and intervention planning. The users of diagnostic information will need a clear picture of how specific patterns of diagnostic results, showing strengths and weaknesses, relate to appropriate placement decisions. In particular, using diagnostic information for placement would require users to operationalize the skill levels required for placement at each course level, as sketched below.
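One minimal way to operationalize diagnostic results for placement is to define, for each course level, the minimum subscore required in each content area. The sketch below is a hypothetical illustration: the content areas, the 0-100 score scale, the level labels, and the cut scores are all assumptions that local faculty would need to set and validate.

    # Hypothetical minimum diagnostic subscores (0-100) per course level.
    REQUIREMENTS = {
        "college-level": {"arithmetic": 80, "algebra": 75, "functions": 70},
        "intermediate":  {"arithmetic": 70, "algebra": 55, "functions": 40},
        "developmental": {"arithmetic": 0,  "algebra": 0,  "functions": 0},
    }

    def place_from_diagnostic(subscores):
        """Return the highest course level whose subscore cuts are all met."""
        for level in ("college-level", "intermediate", "developmental"):
            if all(subscores.get(area, 0) >= cut
                   for area, cut in REQUIREMENTS[level].items()):
                return level

    place_from_diagnostic({"arithmetic": 85, "algebra": 60, "functions": 50})
    # -> "intermediate" (meets every intermediate cut, but falls short of
    # the algebra cut for college-level)

The point of such a table is simply to make the placement implications of a diagnostic profile explicit and auditable, rather than leaving them to ad hoc judgment.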

Non-cognitive measures: Considerable research supports the idea that student personality, affective, and other individual characteristics are related to academic performance, including college grades, retention, graduation, and other outcomes (Hughes & Scott-Clayton, 2011). These characteristics include positive self-concept, realistic self-appraisal, preference for long-term goals, and availability of support systems. For example, extensive research has shown that students with higher levels of self-efficacy not only perform better academically in general, but (more specifically) are better at solving conceptual problems, manage their time better, show greater persistence, show greater strategic flexibility, set higher aspirations, and are more accurate in evaluating their own performance than are low-efficacy students (Bandura, 1997). Research has consistently shown that self-efficacy and cognitive skills are somewhat related, but also that they are clearly distinct. So, for a given level of cognitive skills, students may display a wide range of self-efficacy beliefs, depending on how they interpret, store, and recall successes and failures.

Other instruments are available that address learning strategies, learning styles, attitudes, study skills, test anxiety, and personality variables. For example, the Learning and Study Strategies Inventory (LASSI), developed by Claire Weinstein at the University of Texas at Austin, assesses learning skills, attitudes, motivation, anxiety, time management, concentration, and use of study aids. The LASSI has mainly been used as a diagnostic tool but could provide input into placement decisions. The Big Five personality inventory (NEO PI-R) by Costa and McCrae (1985), probably the best-researched and most widely used personality inventory, could also be considered as a placement instrument. Of the five major dimensions, several have been shown to be predictive of academic performance, particularly the domain scales of Openness to Experience and Conscientiousness.

More recently, research in the area of Positive Psychology has found relationships between affect and cognitive performance, suggesting that affect can also be predictive of school performance. For example, Fredrickson (2009) showed that more positive levels of affect were associated with creativity, ability to conceptualize, critical thinking, integrating information, improved decision-making, better negotiating performance, greater resiliency and stress resistance, wider focus of attention, and better retention of information. Her “Broaden and Build” theory asserted that more positive levels of affect significantly improve neural functioning and performance.

At this point, non-cognitive measures are rarely used, probably due to the lack of awareness of the measures and the research behind them, but also due to the scarcity of time and resources at community colleges (Hughes & Scott-Clayton, 2011).

Use of Multiple Measures

There are several possible ways to incorporate multiple measures into the placement process. For example, statistical algorithms could combine the information: individual placement scores could be converted to a common metric (such as a z-score) and then totaled or averaged, a unit-weighted approach. Alternatively, the individual scores could be differentially weighted to reflect the greater or lesser importance of some measures. Standards for mapping the combined score values to placement levels would then need to be defined. A minimal sketch of both composites follows.
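The sketch below illustrates the unit-weighted and differentially weighted composites just described. Everything in it is hypothetical: the three measures, the cohort scores, the weights, and the cut scores mapping composites to placement levels would all need to be derived from local validation data.

    from statistics import mean, pstdev

    def z_scores(values):
        """Standardize raw scores against the cohort mean and SD."""
        mu, sigma = mean(values), pstdev(values)
        return [(v - mu) / sigma for v in values]

    # Hypothetical scores on three measures for five students.
    measures = {
        "placement_test": [45, 62, 78, 55, 90],
        "hs_math_gpa":    [2.1, 3.0, 3.6, 2.8, 3.9],
        "noncognitive":   [30, 41, 55, 38, 60],
    }
    standardized = {name: z_scores(vals) for name, vals in measures.items()}

    # Unit-weighted composite: average each student's z-scores.
    unit_weighted = [mean(zs) for zs in zip(*standardized.values())]

    # Differentially weighted composite (weights are illustrative only).
    weights = {"placement_test": 0.5, "hs_math_gpa": 0.35, "noncognitive": 0.15}
    weighted = [sum(w * standardized[name][i] for name, w in weights.items())
                for i in range(len(unit_weighted))]

    def placement_level(z, cuts=(-0.5, 0.5)):
        """Map a composite z-score to a course level via illustrative cuts."""
        if z < cuts[0]:
            return "developmental"
        return "intermediate" if z < cuts[1] else "college-level"

Either composite still requires locally validated cut scores, and the choice between unit and differential weighting is an empirical question of which weighting best predicts course success.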

Marwick (2002) used a simpler approach: she considered the placement level suggested by each measure separately and then adopted the measure that suggested the highest level of placement (a rule sketched below). Her research showed that, when some measures suggested a higher level of placement while other measures suggested a lower level, placement at the higher level resulted in improved course success. She felt that the placement process should strive to avoid the error of placing students into lower-level courses when they had a likelihood of succeeding at the higher level.
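Marwick's decision rule can be stated in a few lines. In the sketch below, the level labels and their ordering are hypothetical placeholders for an institution's actual course sequence.

    # Course levels ordered lowest to highest (labels are illustrative).
    LEVELS = ["developmental", "intermediate", "college-level"]

    def highest_suggested_placement(suggestions):
        """Marwick (2002): adopt the highest level any measure suggests."""
        return max(suggestions, key=LEVELS.index)

    highest_suggested_placement(["intermediate", "college-level", "developmental"])
    # -> "college-level"

The rule deliberately trades a higher risk of over-placement for a lower risk of under-placement, consistent with Marwick's finding that the higher placement improved course success.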

Correction for Placement Errors

Regardless of the process used, placement errors are likely: either placing students in lower-level courses when they could have performed well at higher levels, or placing them in courses that are too difficult and that they are likely to fail. An important part of a good placement process is to allow for the correction of such errors.

One approach to correcting placement errors is to conduct additional diagnostic testing early in the quarter, in the courses to which students are assigned by the placement process. Such diagnostic testing should be done as early as possible to allow the successful transfer of misplaced students to higher- or lower-level courses. If the diagnosis can be done within the first or second day of class, the transfer process can be handled easily. However, if the diagnosis takes several weeks, then some higher- and lower-level courses would need to use a “staggered start” and begin two weeks after the start of the quarter to accommodate transferring students; the course material would then need to be taught in a shortened time frame. The two-week diagnostic approach, with staggered starts for some classes, has been used effectively in some Michigan community colleges (Nelson, personal communication, 2010).

An additional method of correcting potential placement errors is an appeals process for students who believe they have been inappropriately placed. Students could contact the department dean or a designated faculty member, provide a rationale for an alternative placement, and attempt to negotiate the desired placement. Currently, most colleges probably accommodate student challenges to placement, but do so on a case-by-case basis, with only the most motivated or assertive students opting to challenge. A more equitable approach would be to educate students, prior to placement testing, about their options after testing. This could occur at the same time students are learning of the high-stakes nature of the testing process.