Assessment . . . suddenly it has become the new buzzword of higher education. Yet the recent emphasis on assessment is no doubt viewed with cynicism by some. A typical response might be that university professors have been doing assessment for years . . . it's called tests, grades, and the other forms of evaluation that professors have always used. So what is all this recent attention to assessment about?
For us here at Eastern, I hope our assessment effort is a vehicle for getting individual faculty to work together to view student achievement in its totality, not just in its individual dimensions in a single class. Because faculty teach students in individual courses, they understandably tend to view student achievement as defined by individual class performances. Assessment, as we have begun to use it here at ENMU, measures student achievement across an entire program, and begins with a critical question: what do we want our students to be able to do at the conclusion of their program of study? These measures can be defined by their relationship to national benchmarks or to our own locally developed standards. In either case, they should reflect our beliefs about the role of our institution, that critical concept of mission.
In an era of increasing public criticism of and demands for accountability from higher education, being able to demonstrate our impact upon students is clearly crucial to maintaining public support. Equally important, I believe that assessment is a mechanism to increase dialogue among colleagues, an opportunity to develop a shared vision of our enterprise, and perhaps one way to help us define our collective goals.
As part of the University’s efforts to assess overall effectiveness and students' perceptions of their educational experience, the four college deans conducted a pilot trial of exit interviews with samples of graduating seniors. While the deans had conducted exit interviews in the past, the interviews conducted during the spring of 1995 were a first attempt at a coordinated effort, utilizing common questions and reporting protocols. The deans forwarded the exit interview responses to the Assessment Resource Office, which was responsible for reviewing and analyzing the data. Exit interview data are anecdotal and subjective in nature, and it is hoped that they will provide broader interpretations of the university’s effectiveness than the existing objective institutional data and surveys. These interviews asked students, among other things, about their satisfaction with ENMU, their feelings about the curriculum, and their plans for the future.
Approximately 85 students from 32 major and 15 minor programs responded to the study. Students were not limited to one response, so the percentages will not necessarily add up to 100. Of students reporting:
When asked about the strengths of their programs, students' answers overwhelmingly centered on the faculty:
Aspects of the program design that were considered positive were:
Nine percent of students reported weaknesses in communication, whether between faculty and students, between faculty and department, or between department and the external world. Other responses regarding faculty weaknesses were too few faculty (9%), as well as unprepared, unaccountable, out-of-date, and inaccessible faculty (each reported by under 5% of students). Twenty-two percent of respondents reported deficiencies in the facilities at ENMU, with too few lab hours and less-than-modern equipment being the two most frequent responses. Problems associated with the rotation and scheduling of courses were mentioned by 16% of the graduating students, and the need for more courses in the curriculum by 44%.
In the matter of general education requirements:
Concerning the issue of diversity education, 47% of students reported having gained the most benefit simply from being exposed to people of other cultures. Thirty-one percent reported having their views of diverse groups broadened, and 38% had positive regard for the way diversity was presented in the classroom. These figures contrast with the 2% of graduates who had negative feelings about the way diversity was presented in class and the 2% who do not believe diversity should be included in general education requirements. As another positive aspect related to general education, four students also reported being “inspired.”
Regarding services at ENMU, 56% of respondents reported that they were treated generally well, while 2% reported being treated generally poorly. Although some comments were recorded on police services, commuting student services, the bookstore, advising, the library, health services, and counseling, the service areas that overwhelmingly had the most comments were registration and financial aid. Twelve percent of students commented negatively about registration, while 20% reported having problems with financial aid. It should also be mentioned that 15% of students offered positive comments about financial aid.
As for plans for the future, 35% of the responses indicated a desire to continue education either now or at some later time, and 85% reported they would be looking for employment. Five percent of students were undecided about their future plans.
As stated at the beginning of this report, this analysis is preliminary, and it is expected that future exit interviews will yield as much or even more valuable information about how the University is serving its students. The four college deans are working with the Assessment Resource Office to improve the exit interview process.
Description of CAAP Test Scales
Since the fall semester of 1993, ENMU has administered the Collegiate Assessment of Academic Proficiency (CAAP) to students who have completed between 55 and 65 credit hours (rising juniors). The CAAP measures students' academic achievement in writing, reading, mathematics, science reasoning, and critical thinking (see sidebar for scale descriptions). The CAAP replaced the previously administered COMP (College Outcomes Measure Program).
In this brief analysis, CAAP results on the five scales are examined by semester of administration. One might assume that average scale scores would increase from term to term, but this is not the case (see charts on page 4). Writing scale scores showed small increases and then declined. Reading scale scores can best be described as unsteady, as can critical thinking scores. Science reasoning scores have declined over the five administrations, and mathematics scale scores declined but have recently shown some improvement.
No attempt is made here to explain these results; none is readily apparent. It is my intention to share these results with the campus community so that others might offer explanations and seek responses that may lead to enhanced student learning.
A limitation of this endeavor is that unequal numbers of students are captured for assessment each term. In the first administration, fall of 1993, 556 students participated; 95 participated in the spring of 1994, 17 in the summer of 1994, 148 in the fall of 1994, and 76 in the spring of 1995. Beginning with this semester, the University will once again administer the CAAP to all rising juniors on one given day—Assessment Day, scheduled for November first.
As with previous assessment efforts, it seems that a more concerted effort is required in using the results of our assessments. I would like to invite members of the campus community to suggest how CAAP results might be better used to improve curriculum, or to propose additional analyses of CAAP results that may add insight into student learning. As always, I can be reached at extension 4313 or by email at email@example.com