Assessment Process
SCSU systematically measures student gains and uses data and analysis to inform ongoing strategic planning, resource allocation, recruitment, and promotion of the university. SCSU’s Office of Assessment and Planning partners with task forces and programs across campus to conduct multiple assessments that are used to diagnose and respond to student needs and to identify and support factors that impact student success.
The cores of such surveys as the Academic Program Review Student Survey and the Southern Experience Survey were developed by students in research methodology courses. During the first year of implementation of the First-Year Experience (FYE) Program, students approached the director with complaints. In response, the director of assessment invited the students to develop a survey that would identify areas in need of improvement. Every Tuesday for a semester, the director of assessment met with the students, taught them principles of psychometrics, and guided them in writing items for the survey. This was the origin of the FYE Self-Assessment that is currently administered to all first-year students. Similarly, commuter students approached the Office of Assessment and Planning and requested that the office develop a survey about the issues that commuter students face. The director agreed to administer a commuter student survey with one condition: The commuter students had to develop the items for the survey. In this way, the Commuter Student Survey emerged. With the guidance of the Director of the Office of Assessment and Planning, students analyze and interpret data.
In 2007, Southern Connecticut State University (SCSU) initiated a comprehensive First-Year Experience (FYE) Program to promote student engagement, improve students’ academic competencies, and boost retention rates. Only 50% of incoming students were enrolled in a first-year experience seminar; this provided an opportunity for the university to measure the impact of an FYE seminar on student success. The two groups of students were comparable in terms of their demographic profiles. Yet even three years later, the seminar participants demonstrated significantly higher retention rates, higher GPAs, and more earned credits than students who did not participate in the seminar. Measures of crystallized learning (high school rank, SAT scores) and demographic data (e.g., adjusted family income) were found to be relatively weak predictors of student success. This study identified future orientation, a psychological-educational factor that is amenable to change, as helping to explain the difference in outcomes between the FYE seminar and non-seminar students. In addition to future orientation, what other changeable factors predict students’ persistence, academic success, and graduation outcomes?
To answer this research question, the university’s longitudinal, cohort datasets for the incoming classes of 2007 through 2015 were merged and analyzed (n=16,263). The Office of Assessment and Planning conducts longitudinal, cohort studies in order to identify patterns and anomalies in student persistence and graduation. The aim is to alleviate the conditions that lead to student withdrawal and strengthen the conditions that promote students’ academic success. All first-time, full-time undergraduate students are included in the longitudinal, cohort studies. The students are followed from New Student Orientation through graduation from the university or subsequent enrollment in other colleges and universities. As each incoming class enters the university, a cohort dataset is established. A cohort dataset initially contains such demographic information as high school rank, high school GPA, SAT scores, gender, ethnicity, residential status, registration with Veterans Services, and English and Math placements. Each year, new data are added, including earned credits, cumulative GPA, registration status, and scores on surveys and direct assessments.
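The structure of these cohort datasets can be pictured with a brief sketch. The following base R code is a hypothetical illustration rather than the Office of Assessment and Planning’s actual scripts; all file and variable names are assumptions. It shows how a baseline cohort file might be merged with yearly outcome data and then stacked with other cohorts into a single longitudinal analysis file.

```r
# Illustrative sketch only: hypothetical file and variable names throughout.

# Baseline record created when an incoming class enters the university
cohort_2012 <- read.csv("cohort_2012_baseline.csv")  # high school rank/GPA, SAT,
                                                     # gender, ethnicity,
                                                     # residential status, ...

# Outcome data added at the end of the first year
year1 <- read.csv("cohort_2012_year1.csv")           # earned credits, cumulative
                                                     # GPA, registration status,
                                                     # survey scores, ...

# Merge on the student identifier; all.x = TRUE keeps students who withdrew
cohort_2012 <- merge(cohort_2012, year1, by = "student_id", all.x = TRUE)

# Stack the yearly cohorts (2007-2015) into one longitudinal analysis file
all_cohorts <- rbind(
  cbind(cohort_2012, cohort = 2012)
  # , cbind(cohort_2013, cohort = 2013), ... and so on for the other cohorts
)
```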
Surveys and assessments include the Beginning College Survey of Student Engagement (BCSSE), the First-Year Experience Self-Assessments (locally-developed instruments), the National Survey of Student Engagement (NSSE), the Southern Experience Survey (a locally-developed continuing student survey), and the Collegiate Learning Assessment Plus (CLA+), a performance-based assessment that is administered to freshmen and seniors. The surveys that are administered to the students comprise items that measure both learning and young adult development. Also added to the datasets are students’ scores on papers submitted to the Multi-State Collaborative to Advance Quality Student Learning, a collaboration among higher education institutions, the Association of American Colleges and Universities (AAC&U), and the State Higher Education Executive Officers (SHEEO). SHEEO is the national association of the chief executives of statewide governing, policy, and coordinating boards of postsecondary education. Freshmen and seniors submit de-identified copies of their final papers, which are scored on AAC&U rubrics by faculty in other states. Dr. Ben-Avie is on the national steering committee of the Multi-State Collaborative.
To home in on the most important predictors of persistence, academic success, and graduation outcomes, analyses were conducted using IBM Watson™ Analytics, a cloud-based cognitive data discovery service. Subsequently, further analyses were conducted in SPSS Statistics, AMOS, and R, and with machine learning algorithms. A consistent pattern that emerged from the analyses was the predictive strength of the First-Year Experience (FYE) Self-Assessment and the Southern Experience Survey, a continuing student survey that is administered to sophomores and juniors. These complementary surveys measure the relationship between students’ learning and their development. The strategy of conducting longitudinal, cohort studies reflects the assessment approach of the university: Since learning and development are incremental processes, studies that follow students over time are essential.
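As an illustration of such predictor analyses, the sketch below fits a logistic regression predicting one-year retention from survey and background measures. It is a simplified, hypothetical stand-in for the Watson Analytics, SPSS, AMOS, R, and machine-learning analyses described above; the variable names are assumptions.

```r
# Illustrative sketch (hypothetical variable names): which measures predict
# one-year retention in the merged longitudinal file (all_cohorts)?
model <- glm(retained_year1 ~ fye_selfassess + southern_experience +
               hs_gpa + sat_total + earned_credits_year1,
             data = all_cohorts, family = binomial)

summary(model)    # coefficient estimates and significance tests
exp(coef(model))  # odds ratios for each predictor
```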
The longitudinal, cohort datasets are used to empirically evaluate the model in which students’ demonstration of the competencies that employers desire in new hires is a function of their developmental trajectories, their ability to work autonomously and to handle cognitive complexity, their learning, and an orientation to the future that informs goal setting and taking action in the here-and-now to achieve desired futures. Above all, the model emphasizes relationships.
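A structural model of this kind could, for example, be specified with the lavaan package in R. The sketch below is purely illustrative: the observed variable names are assumptions, and the single-equation path model is a simplification of the model described above rather than the university’s actual specification.

```r
# Illustrative sketch using the lavaan package (hypothetical variable names):
# employer-valued competencies modeled as a function of developmental trajectory,
# autonomy, cognitive complexity, learning, and future orientation.
library(lavaan)

model <- '
  competencies ~ development + autonomy + cognitive_complexity +
                 learning + future_orientation
'
fit <- sem(model, data = all_cohorts)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```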
New surveys are driven by questions that the university community has about the students and how well the university functions to promote their learning and development. Surveys are developed by teams with representatives from different offices across campus, including faculty, non-instructional staff responsible for student support services (including the Registrar’s Office), student affairs professionals, and the university leadership. As noted earlier, the cores of the institutional surveys tend to be written by students in research methodology courses. Once drafts of surveys are completed, they are shared widely throughout the university community in order to elicit feedback and additional items. For example, the draft of the Transfer Student Survey was improved by the recommendations of the committee on transfer students, the enrollment management team, student affairs, the academic transfer student office, academic advisement, and university leadership.
Liberal Education Program (LEP) affinity groups composed of faculty teaching in specific Areas of Knowledge and Experience developed rubrics to measure what is important to them in terms of student learning and success. In general, the process of developing a common, shared rubric, one that would directly measure a competency embedded within the Liberal Education Program, would begin with faculty members sharing their favorite assignments with their colleagues in other departments. Instead of relying only on standardized tests, the interdisciplinary LEP affinity groups invested a great deal of time and effort to develop rubrics that directly measure what they value.
The work of the affinity groups was guided by the “Rubric of Rubrics” developed by the Director of the Office of Assessment and Planning, who facilitated the rubric development process among the interdisciplinary affinity groups. The “Rubric of Rubrics” was based on the USAFA Departmental Needs Assessment (University of Northern Colorado). For example, one criterion deals with the following: “Student learning outcomes have been written for this affinity group in a way that allows for meaningful assessment.” Other criteria include:
* The affinity group is fully engaged in assessing each of the student learning outcomes. Direct measures of learning (e.g., exams, papers, projects, portfolios) are used for each outcome and are supplemented by indirect measures (e.g., student feedback, surveys, focus groups) as appropriate.
* Assessment results are shared with all faculty members. Workshops and other forums exist such that faculty members openly discuss assessment results with one another.
* Assessment results are regularly used for the purposes of improving the courses in the affinity group. The affinity group is also in the habit of performing follow-up assessments to ensure that the improvements actually worked.
Since the affinity groups are interdisciplinary, students’ levels of critical thinking, for example, are assessed in such disciplines as anthropology, art, environmental studies, geography, history, media studies, philosophy, physics, psychology, and sociology. Thus, the process of developing rubrics is an open one, with ample opportunities for faculty from across the campus to comment and edit.
The surveys and rubrics are part of the overall assessment plan to determine whether students’ progress along the learner outcomes of the Liberal Education Program (LEP) meets or exceeds expectations. The underlying approach of the assessment plan is to assess student learner outcomes through multiple measures. Consider, for example, critical thinking. The critical thinking papers of sophomores and juniors are scored by faculty during reading day using the rubric developed by the Critical Thinking affinity group. In tandem, the critical thinking papers of first-year students and seniors are scored by faculty in other states as part of the Multi-State Collaborative to Advance Quality Student Learning (MSC). So, too, the Collegiate Learning Assessment Plus (CLA+) is administered to measure the level of critical thinking among first-year students and seniors. In 2016, seniors who had taken the CLA+ as first-year students completed it once again in order to determine the extent of growth over time.
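Growth over time on the CLA+ can be examined with a paired comparison of the two administrations. The sketch below is illustrative only, with assumed variable names for the first-year and senior scores; it is not the office’s actual analysis.

```r
# Illustrative sketch (hypothetical variable names): paired comparison of CLA+
# scores for seniors who also took the assessment as first-year students.
paired <- subset(all_cohorts, !is.na(cla_firstyear) & !is.na(cla_senior))

t.test(paired$cla_senior, paired$cla_firstyear, paired = TRUE)

# Average gain expressed in standard deviation units of the first-year scores
mean(paired$cla_senior - paired$cla_firstyear) / sd(paired$cla_firstyear)
```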
The two-year strategic planning process that SCSU undertook in 2013-15 included perspectives from a wide range of stakeholder groups. An array of primary and secondary data was collected and considered. Examples of these data include the recently completed Student Success Task Force Report, an initiative of the President; assessment data related to student learning outcomes; and other measures of student success and engagement. Further, primary data collected from students included those from the Southern Experience Survey, which was developed on our campus to capture important data related to student success. Each of the subcommittees used a variety of data and assessment results to inform their recommendations, which were then considered by the entire steering committee (which had access to all of the data). The data-driven strategic planning process culminated in the Steering Committee’s development of goals, objectives, and recommendations.
Factors that are associated with good progress toward earning a degree are analyzed, as well as the progress of students who need more intensive support. For the class of 2011, factors associated with good progress toward degree completion, defined as having 90 credits or more, were: Feeling that Southern is a big part of their life, not experiencing trouble managing finances and other commitments, and not having math and/or writing problems. For the class of 2012, factors associated with good progress toward degree completion, defined as having between 60 and 89 credits, were: Not experiencing trouble managing finances and other commitments, not having math and/or writing problems, and enjoying clubs, activities, and support on campus. The students for whom math and organizing thoughts in writing are barriers to academic success comprise a population that is served by the university’s Academic Success Center. This center was an outcome of the Student Success Taskforce, which had been established to find ways to strengthen students’ retention and graduation rates; to provide more strategic and proactive student advising; and to remove any obstacles that students may face as they move toward completing their degrees. The data reports informed the taskforce’s recommendations, including the creation of an Academic Success Center and a department of New Student and Sophomore Programs. Subsequently, data were used to determine the priorities of the Academic Success Center.
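Analyses of this kind amount to grouping students into credit bands and comparing the bands on survey responses. The sketch below illustrates the approach with assumed variable names; it is not the office’s actual analysis.

```r
# Illustrative sketch (hypothetical variable names): group a cohort by earned
# credits and compare the bands on a survey item about managing finances.
cohort_2012$credit_band <- cut(cohort_2012$earned_credits,
                               breaks = c(-Inf, 59, 89, Inf),
                               labels = c("under 60", "60-89", "90 or more"))

# Proportion within each band reporting trouble managing finances and commitments
prop.table(table(cohort_2012$credit_band,
                 cohort_2012$trouble_managing_finances),
           margin = 1)
```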
Since the establishment of the First-Year Experience (FYE) Program, the first major change of the new general education program, a data-driven process has informed instructors’ work with their students. For example, the Beginning College Survey of Student Engagement (BCSSE) is administered to all students attending New Student Orientation. Prior to the beginning of the fall semester, the instructors receive individual BCSSE reports for each of their students. In this way, the instructors begin to know their students before they arrive on campus. During the semester, a locally-developed survey called the First-Year Experience Self-Assessment: Academic Habits of Mind and College Success is administered to all the freshmen. Instructors of the FYE seminar receive a report that compares the responses of the students in their section with those of the rest of the freshman class. These user-friendly section reports guide the instructors in identifying where changes to the curriculum need to be made, as well as in their work with individual students.
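A section report of this kind boils down to comparing one section’s mean responses with those of the rest of the first-year class. The sketch below illustrates the comparison; the data frame, section identifier, and scale names are assumptions, not the actual report code.

```r
# Illustrative sketch (hypothetical names): compare one FYE section's mean
# self-assessment scores with those of the rest of the freshman class.
scales  <- c("academic_habits", "time_management", "future_orientation")
section <- subset(fye_selfassessment, section_id == "FYE-05")
others  <- subset(fye_selfassessment, section_id != "FYE-05")

data.frame(
  scale        = scales,
  this_section = sapply(section[, scales], mean, na.rm = TRUE),
  all_others   = sapply(others[, scales], mean, na.rm = TRUE)
)
```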
The implementation phase of the university’s undergraduate program review process was initiated in fall 2008, following a two-year development and approval process. The plan called for a peer review process using the Undergraduate Curriculum Forum’s Program Review and Assessment Committee (PRAC) to review currently existing undergraduate programs on a five-year cycle. A necessary component of the self-study is the identification of learning outcomes and the use of data about student performance to make improvements to the program and to guide decision making. Programs provide a curriculum map or matrix demonstrating how the curriculum introduces and reinforces the skills, knowledge, and attitudes that students are expected to master. They discuss their strategies and methods for assessing student learning and/or performance (“How do you know your goals for student learning are achieved?”). They describe “how student learning data are used to improve student performance and how data are used to inform departmental discussions and/or initiate changes” to their programs. In addition, the self-study process includes an external site visit review.
Graduate programs have had a long-standing program review process on a five-year cycle, overseen by the Academic Standards Committee of the Graduate Council. All graduate programs must submit a self-study that provides evidence of compliance with standards. For example, as part of the standard on student outcomes, the following description appears on the report template: “The purpose of this section is to demonstrate that the program has a clear, appropriate set of outcomes that both may be measured and that are regularly reviewed by the program with the aim of improving its curriculum.” The programs complete a table with three columns: student outcomes, methods of measurement, and where the outcomes are assessed for all students.
A key component of undergraduate and graduate academic program review is faculty reflection on data related to educational effectiveness. Data provided to the programs come from the course and student census files, created at the end of the third week of the fall and spring semesters. The demographic report includes counts of majors broken down by gender, ethnicity, and enrollment status, as well as the overall term GPA and the number of degrees conferred. The graduation and persistence report tracks first-time, full-time freshmen and full-time transfer students over a ten-year period, calculating their one-year retention rate, four-year graduation rate, and six-year graduation rate, broken down by whether they stayed in their original major or changed to a different major. The summary by course type report breaks down the program’s section data into three categories: online, independent study/internship/thesis, and regular (all other section types are aggregated in this category). The summary by course report breaks down the program’s section data by individual course. The course information report (CIS, also known as the student opinion survey analysis) analyzes the last five years of student opinion data by year.
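The retention and graduation rates in these reports are simple proportions computed from the cohort files. The sketch below illustrates one way such rates could be calculated for a single entering cohort; the variable names are assumptions.

```r
# Illustrative sketch (hypothetical variable names): rates for one entering
# cohort of first-time, full-time freshmen.
ftf <- subset(all_cohorts, cohort == 2010 & first_time_full_time)

one_year_retention <- mean(ftf$enrolled_fall_year2,   na.rm = TRUE)
four_year_grad     <- mean(ftf$graduated_within_4yrs, na.rm = TRUE)
six_year_grad      <- mean(ftf$graduated_within_6yrs, na.rm = TRUE)

# Broken down by whether students stayed in their original major
aggregate(cbind(enrolled_fall_year2, graduated_within_6yrs) ~ changed_major,
          data = ftf, FUN = mean)
```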
The work of the Student Success Taskforce exemplifies the use of assessment and quantitative measures of student success to improve the learning opportunities and results for students. The creation of an Academic Success Center and the hiring of both a director of new student and sophomore programs and a coordinator of student financial literacy and advising are among the developments that resulted from a report of the Student Success Taskforce. The aim of the Student Success Taskforce was to find ways to strengthen retention and graduation rates; to provide more strategic and proactive student advising; and to remove any obstacles students may face as they move toward completing their degrees. The faculty and staff members of the Student Success Taskforce asked the Office of Assessment and Planning to analyze data in order to answer 18 research questions. The questions included the following: What are the incoming characteristics (e.g., average SAT scores, need for remedial courses, high school rank) of students who graduate in six years or less from SCSU compared to those who do not? What are the most common trajectories of students who are not accepted into highly competitive programs (e.g., nursing)? Are there cohort differences and/or changes over time in students’ intention to graduate from Southern? The Office of Assessment and Planning found that cumulative GPA predicts student retention, but which variables that are amenable to change predict GPA? One might assume such variables as information literacy, time management, academic habits of mind, and self-efficacy, but do they? And if so, how do these variables change over time? Are such changes related to students’ majors, their background variables, or something else? Do these variables differ among cohorts? Between students starting at SCSU and those who transfer in or out? What are the most important predictors of retention and graduation?
To respond to these questions, two “mirror” data reports were prepared. The first was a conventional data report with tables and charts. To complement this report, student sketches based on data were developed. The student sketches represent common subgroups within the SCSU student population, their characteristics, barriers, likelihood of success, and chances of retention. For example, the sketch of “Tommy” illustrates the background and experiences of a student who is thriving at SCSU but who originally wanted to go elsewhere for school. In order to portray this group, data analyses were performed on students who had a GPA of 3.50 or above and who had indicated on the New Student Orientation Survey that Southern was not their first-choice college. The “Ted” sketch is based on analyses of students from the incoming class of 2010 with a GPA above 3.0 who were still enrolled after six semesters. “Samuel” is a student from an urban area in Connecticut. “Penny the Pitcher” represents those who reported being student-athletes on the NSSE survey. “Maria” was uncertain whether she expected to graduate from Southern. “Gary” is a commuter who worked 16-20 hours or more per week and left within the first year. “Stella” is a commuter, at least 24 years old, and a part-time student. “Marissa” is a transfer student. These sketches aimed to highlight several predictors found to be important in the most current model of student retention at SCSU. The archetypes certainly did not cover the entire spectrum of students at Southern, but they did exemplify a few common student subtypes and some of their experiences at the university. These sketches demonstrated how various predictors may present in certain groups of students. They also illustrated some of the challenges associated with studying retention, which are important to keep in mind when considering the results. In evaluating Ted and Maria, for example, it may appear that these two students are almost identical based on survey results, yet their retention and graduation statuses will likely differ in the end. The student sketches exemplify an effective method of communicating data findings in an accessible manner.
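Each sketch corresponds to a data-defined subgroup. For example, the group behind “Tommy” can be isolated with a simple subset of the merged cohort file, as in the illustrative sketch below; the variable names are assumptions.

```r
# Illustrative sketch (hypothetical variable names): the subgroup behind the
# "Tommy" sketch -- thriving students for whom Southern was not the first choice.
tommy_group <- subset(all_cohorts,
                      cum_gpa >= 3.50 & first_choice_college == "No")

nrow(tommy_group)                               # size of the subgroup
mean(tommy_group$retained_year2, na.rm = TRUE)  # the subgroup's retention rate
```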
Another effective strategy for communicating data findings is the widespread distribution of infographics. The infographics are designed by students working in the Office of Assessment and Planning. The infographics present the key findings of research studies (i.e., what the OAP director found interesting and important in the research results). All infographics are posted on the team collaborative site for the university community (the “Confluence” site).
Research studies conducted by the Office of Assessment and Planning not only inform the planning committees that are working to improve student outcomes, but also indicate whether the changes that were implemented yielded positive outcomes. For example, one of the major changes at the university in the last 10 years was the implementation of a new general education program referred to as the Liberal Education Program (LEP).
The ongoing evaluation of LEP begins at the course level. LEP defines goals for each of three groups: Competencies (what should an educated citizen be able to do?), Areas of Knowledge and Experience (what should an educated citizen know?), and Discussion of Values (with what values should an educated citizen be conversant?). In the LEP Document, "key elements" (learner outcomes) are listed for each component. Consider, for example, Technological Fluency. Key elements listed under this interdisciplinary area include “being cognizant of ethical and social implications of revolutionary technologies, including but not limited to their impact on security, privacy, censorship, intellectual property, and the reliability of information.” Every Technological Fluency course is expected to incorporate the key elements in the curriculum. Technological Fluency courses are taught by faculty in Computer Science, Physics, Communication, Education, Journalism, Music, Geography, and World Languages and Literature.
Thus, what links these individual courses together are the "key elements" (learner outcomes). The Technological Fluency interdisciplinary “affinity” group developed a shared, common rubric based on the key elements. The evaluation of the LEP therefore starts at the course level and then moves upward to the program level by merging all of the individual course-level data to provide insight into how well the Technological Fluency “affinity” group meets expectations for positive student outcomes. The scores on the rubrics of the different affinity groups are then considered to determine the extent to which students’ learning and development meet or exceed the expectations embedded within the university’s LEP student learner outcomes. In this way, assessment at the university is vertically integrated.
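The vertical integration described here can be pictured as a roll-up of rubric scores from the course level to the affinity-group level and then to the LEP as a whole. The sketch below is illustrative only; the data frame and column names are assumptions.

```r
# Illustrative sketch (hypothetical names): rubric_scores holds one row per
# scored student artifact, with columns affinity_group, course, criterion,
# and score (e.g., 1-4 on the shared rubric).

# Course level: mean score per criterion within each course
course_level <- aggregate(score ~ affinity_group + course + criterion,
                          data = rubric_scores, FUN = mean)

# Affinity-group (program) level: merge the course-level results
group_level <- aggregate(score ~ affinity_group + criterion,
                         data = course_level, FUN = mean)

# LEP level: share of artifacts meeting or exceeding expectations (score >= 3)
mean(rubric_scores$score >= 3, na.rm = TRUE)
```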
The ongoing evaluation of the First-Year Experience (FYE) Program demonstrates horizontal integration, as the program is the result of a collaboration between academic affairs and student affairs. During students’ first year of college, they complete the FYE Self-Assessment, submit papers to the Multi-State Collaborative to Advance Quality Student Learning (MSC), and take the Collegiate Learning Assessment Plus (CLA+). In students’ senior year, they complete the Southern Experience Survey, submit papers to the MSC, and take the CLA+. The Southern Experience Survey was developed by a taskforce composed of faculty and staff from academic affairs and student affairs; the taskforce was jointly chaired by the Vice President for Student Affairs and the Dean of the School of Arts & Sciences.
The evidence collected is used to determine the progress of implementing the university’s strategic plan. The university’s 2015-2025 strategic plan was completed after a two-year campus-wide planning process. Progress on the strategic plan is monitored on a four-point scale: (1) pending/not started, (2) started/minimal progress, (3) progressing, and (4) complete. Every objective under the four overarching goals is scored in this manner.
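Scored in this way, progress can be summarized as a simple tally of objectives at each stage under each goal. The sketch below illustrates such a tally with hypothetical data; it is not the university’s actual progress report.

```r
# Illustrative sketch (hypothetical data): tally strategic plan objectives by
# goal and progress score.
progress <- data.frame(
  goal  = c("Goal 1", "Goal 1", "Goal 2", "Goal 3", "Goal 4"),
  score = c(3, 4, 2, 3, 1)  # 1 = pending, 2 = minimal progress,
                            # 3 = progressing, 4 = complete
)

table(progress$goal,
      factor(progress$score, levels = 1:4,
             labels = c("pending", "minimal progress", "progressing", "complete")))
```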