The institution identifies expected outcomes, assesses the extent to which it achieves these outcomes, and provides evidence of improvement based on analysis of the results in each of the following areas: 3.3.1.1. educational programs, to include student learning outcomes.
Compliance Status
Louisiana State University and A&M College is in compliance with this principle.
Narrative
Louisiana State University and A&M College (LSU) ensures compliance with Comprehensive Standard 3.3.1.1 through strict adherence to the criteria expressed in the standard, including, for all educational programs, identification of measurable learning outcomes, development and implementation of methods for direct assessment of students’ achievement of outcomes, and use of results to improve student learning.
General Academic Planning and Assessment at the Institutional Level
The formal policies and procedures pertaining to educational outcomes planning and assessment constitute an integral component of the system of institutional planning and assessment that is described in the Compliance Certificate for Core Requirement 2.5 and, as such, involve the Office of Academic Affairs, the University Planning Council, the Institutional Effectiveness Review Board, the University Review and Assessment Council (URAC), the Faculty Senate, the administration of academic colleges and departments, the University Budget Committee, and the Office of Assessment and Evaluation (OAE). All educational outcomes at the degree program level derive from the Flagship 2020 goal of “a faculty-led and student-centered learning environment that develops engaged citizens and enlightened leaders” [1]. This general goal for student learning developed by the University Planning Council subsumes a wide variety of disciplinary and interdisciplinary outcomes, and the associated teaching and learning strategies for reinforcing students’ achievement of them. These educational goals operate within a context of well-defined procedures and policies enforced by two committees of the LSU Faculty Senate: Admissions, Standards, and Honors (ASH), and Courses and Curricula. The ASH Committee, for instance, not only formulates and monitors policies and standards pertaining to admission standards, but also “conducts continuous studies” and makes “recommendations designed to improve the standards of scholarship among students” [2]. In a similar manner, the Courses and Curricula Committee applies consistent evaluation criteria in the process of approving new courses and curricula or of adapting existing ones [3]. The Office of Assessment and Evaluation works with the provost’s office to conduct both general workshops and specific consultations designed to ensure effective articulation of educational goals and implementation of associated valid assessments, and URAC applies a four-step formal process in reviewing academic departments and programs in a seven-year cycle that includes attention to units’ implementation of student learning outcomes assessment and use of results to improve the general level of achievement.
Institutional Commitment to the Process of Learning Outcomes Assessment
In 2006 the University Assessment Council (UAC) proposed the implementation of a biennial reporting process through which, for each degree program, academic departments would document the implementation of assessment and interpretation of data and indicate programmatic changes based on the assessment. By May 2008, a review conducted by two-person teams of UAC faculty members demonstrated that approximately one-third of the degree programs had effectively satisfied the evaluation criteria. During the 2008-09 academic year those programs which were deemed not to be in compliance were directed by the provost to meet during fall semester 2008 with members of the UAC for specific feedback and to present an updated or “interim” report in April 2009. Between October 2008 and November 2009, faculty members from fifty-two of seventy-six academic departments met formally and exclusively—and, as needed, multiple times—with staff in the Office of Assessment and Evaluation, assisted by faculty members of the UAC, to develop measurable outcomes and associated valid assessment measures [4].
By 2008, review of degree program assessment formats revealed to the UAC that a biennial reporting obligation was too infrequent to sustain the level of faculty attention needed for consistent implementation of learning outcomes assessment. Accordingly, the council proposed that all degree program outcomes should be assessed annually, during the academic year, with a corresponding annual assessment report due on October 15. While the UAC was transitioning to the annual cycle for all degree programs, the Program Review Council (PRC), after conducting its own parallel review of processes and guidelines, proposed significant changes to the program review process. The perceived value of a closer relation between program review and learning outcomes assessment led the Vice Provost for Academic Programs, Planning, and Review, in collaboration with both the PRC and the UAC, to merge the two councils into a new University Review and Assessment Council (URAC) for the 2010-11 academic year. Working through the Office of Assessment and Evaluation, the new council has continued the systematic evaluation of formats for learning outcomes assessment in the 230 LSU degree programs and associated concentrations, followed by required one-on-one meetings and general workshops to address deficiencies [5].
Criteria for Program-Level Student Learning Outcomes Assessment
The criteria chosen for determining the quality of student learning outcomes at the program level are
A set of measurable student learning outcomes that encompasses the categories of learning that constitute study in the program, as defined and described by the teaching faculty;
For each categorical outcome, multiple measures for determining the extent to which students in general are achieving the outcomes by the point of graduation;
At least one valid, direct assessment measure for each categorical outcome;
Assessment measures that are not based on course grades;
Documented/described processes for implementation of assessment measures;
Annual implementation of measures and posting of data by June 15; and
Faculty analysis and interpretation of data and posting of Learning Outcomes Report by October 15, including longitudinal consideration of results in relation to previous reports and indication of scheduled plans of action for improving student learning on the basis of interpretation of data [6].
Student Learning Outcomes Assessment in Academic Degree Programs
The Degree Program Assessment Cycle
The institutional format for assessment of student learning outcomes is structured chronologically around the steps in the annualized “LSU Degree Program Planning and Assessment Cycle.” The cycle is based on the three criteria of the comprehensive standard: (1) establishment of measurable learning outcomes at the program level; (2) assessment of “the extent to which” graduating students are achieving the outcomes; and (3) “evidence of improvement based on analysis of the results.” Each annual cycle corresponds to the academic calendar; for example, the fall 2011/spring 2012 academic calendar constitutes the 2012 Cycle. Typically, multiple assessment methods are implemented annually for each outcome, with the primary method designed to provide evidence of the level of achievement at the point of graduation. Members of the teaching faculty administer the assessments, interpret the data, and adjust aspects of program content and pedagogies accordingly. Administrative responsibility for each degree program’s adherence to the program’s individually described assessment format and to the cycle resides with the dean of the college or school in which a given degree or educational program is housed. The annual cycle is an integrated one in which professional staff in the Office of Assessment and Evaluation—acting on behalf of the Office of Academic Affairs and the University Review and Assessment Council—review the information for each program after the annual reports are submitted in October and provide feedback to the dean, who undertakes an independent review and provides feedback to the faculty through a formalized Dean’s Authentication Rubric in TaskStream. OAE feedback may come in the form of a rubric completed in TaskStream, in a detailed correspondence via e-mail, or in required one-on-one workshops with teaching faculty and/or chairs. Attachments 7, 8, 9, 10, and 11 show examples of OAE feedback to deans through a TaskStream rubric, based on Comprehensive Standard 3.3.1.1, with a score of 1 indicating not effective, 2 indicating effective, and 3 indicating highly effective [7] [8] [9] [10] [11]. Attachments 12, 13, 14, and 15 show examples of OAE feedback to deans through e-mail [12] [13] [14]. A visual representation of the cycle is available in attachment 16 [15].
Documentation in TaskStream
As the LSU faculty developed an understanding of the requirements for program-level student learning outcomes assessment from 2006 through 2011, documentation of program assessment formats and results occurred in the LSU Assessment Matrix. Developed in-house, on the basis of the criteria in Comprehensive Standard 3.3.1.1, the matrix displayed structural layers of measurable outcomes and associated valid measures for assessing students’ achievement of them and included a format for attaching data files, rubrics, and outcomes reports. Attachment 7, showing the matrix entry for the BS in Nutritional Sciences, indicates the capacity of the matrix to represent a very basic but functional institutional approach to student learning outcomes assessment [16]. This system served the process effectively while the UAC worked to develop the teaching faculty’s understanding of the principle of learning outcomes assessment at the degree program level, along with strategies for effectively implementing it. Fundamental but primitive by current standards, the matrix was by 2011 becoming inadequate for its purpose as the faculty’s sophistication grew, along with the need to observe student learning longitudinally and to manage files and generate data through a single system.
In 2010 the Office of Academic Affairs and the URAC, acting on the advice of a task force, adopted the TaskStream Administrative Management System (AMS) as the platform for formal documentation of strategic planning, including student learning outcomes assessment. Since the 2011-12 academic year (2012 Cycle), assessment of student learning outcomes at the degree program level has been housed in the TaskStream AMS, specifically in a Degree Program Assessment Template for each degree program. The template includes program-level outcomes, assessment measures, data sets, and annual reports documenting ongoing evaluation of student learning and action plans based on interpretation of the data. The teaching faculty of a degree program is responsible for describing the outcomes and assessment measures, implementing the measures on an annual basis for each program-level outcome, interpreting the results with a view toward course and curricular changes that are likely to improve student learning, implementing the changes, describing the findings and action steps in an annual Learning Outcomes Report to the dean, and representing all aspects of the assessment in the TaskStream template for the program.
The text below shows the documentation for the first learning outcome listed in the TaskStream Data Sets module in the template for the BA in Political Science.
Outcome: Knowledge of Theories and Concepts.
Measures: Direct Indicators Using Exams and Indirect Indicators Using Graduating Senior Surveys
Program level: Direct – Exam: At the end of the spring semester, final exams in 4000-level courses of all graduating political science majors are collected. A sample of these exams is drawn and assessed by a panel of faculty members.
Details/Description: Faculty members assign scores for each learning outcome using the following scale: 0 = Unacceptable; 1 = Acceptable; 2 = Exceptional; NT = Not Tested. The results of this direct assessment are tabulated and included in the learning outcomes report. An indirect assessment of this learning outcome is obtained from a survey conducted of graduating seniors at commencement. The survey asks seniors to assess how well their course work met this outcome using the following scale: 0=poorly, 1=not very well, 2=well, 3=very well, and 4=exceptionally well. These results are tabulated and included in the learning outcomes report.
Acceptable Target: For the direct measure of assessment based on the faculty panel, an acceptable target is that 70% of students score at least a 1 (acceptable) on the three-item scale.
For the indirect measure of assessment based on the survey of graduating seniors, an average of 2.5 on the five-item scale is an acceptable target.
Ideal Target: For the direct measure of assessment based on the faculty panel, an ideal target is that 80% of students score at least a 1 (acceptable) on the three-item scale.
For the indirect measure of assessment based on the survey of graduating seniors, an average of 3.0 on the five-item scale is an ideal target.
Implementation Plan (timeline): The faculty panel meets annually to assess student exams. Graduating seniors are surveyed annually.
Key/Responsible Personnel: faculty
Supporting Attachments:
PDF Explanation of Scales Used in Assessing Each Outcome BA in Political Science.pdf (Adobe Acrobat Document)
Summary of Findings: Direct measure of learning outcomes: 92%
Indirect measure of learning outcomes: 3.17
Results: Acceptable Target Achievement: Exceeded; Ideal Target Achievement: Exceeded
Recommendations: The department faculty met to discuss the empirical findings from our assessment process. While the faculty is in agreement that the undergraduate program is working in a manner consistent with the department’s mission, there were some concerns raised during this exercise that require attention. For example, some faculty members expressed concerns that the assessment procedure did not provide adequate information for drawing conclusions about student learning. For example, the “acceptable” category used in coding graduating seniors’ exams was viewed as far too broad. Indeed this frustration is part of the impetus for the department adopting new procedures that will be implemented in the coming year with the results reported in fall 2013. Some faculty members commented that the results suggest that greater efforts are needed to include more undergraduate students in the research process. Engagement of students in research being conducted by professors might boost student achievement in this learning objective.
Reflections/Notes: The faculty is in agreement that the undergraduate program is working in a manner consistent with the department’s mission. Graduates are meeting the department’s expectations for achieving this learning outcome [17].
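The tabulation described in the documentation above is simple arithmetic: compute the share of sampled exams that the faculty panel scores at least 1 and the mean graduating-senior survey rating, then compare each figure with the stated acceptable and ideal targets. The short Python sketch below is purely illustrative (it is not part of LSU's or the department's tooling; the function names and sample data are assumptions), but it shows the kind of check the stated targets imply.

    # Hypothetical sketch, not LSU's actual tooling: checks the direct (faculty-panel)
    # and indirect (senior-survey) measures described above against the stated targets.

    def direct_measure(panel_scores, acceptable=0.70, ideal=0.80):
        """Share of sampled exams scored 1 (acceptable) or 2 (exceptional); 'NT' items are excluded."""
        scored = [s for s in panel_scores if s != "NT"]
        share = sum(1 for s in scored if s >= 1) / len(scored)
        return share, share >= acceptable, share >= ideal

    def indirect_measure(survey_responses, acceptable=2.5, ideal=3.0):
        """Mean graduating-senior survey rating for the outcome (0 = poorly ... 4 = exceptionally well)."""
        mean = sum(survey_responses) / len(survey_responses)
        return mean, mean >= acceptable, mean >= ideal

    # Illustrative data only; the report itself cites 92% on the direct measure and 3.17 on the survey.
    exams = [2, 1, 1, 2, 0, 1, 2, 1, 1, 2, "NT", 1]
    survey = [3, 4, 3, 2, 4, 3, 3]
    print(direct_measure(exams))     # share of exams scored 1 or above, plus target flags
    print(indirect_measure(survey))  # mean survey rating, plus target flags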
The full Degree Program Assessment Template for the BA in Political Science is included in attachment 17 [17], with supporting documents in attachments 18, 19, and 20 [18][19][20]. The full templates for the 2012 Cycle of an additional 63 of the more than 230 LSU academic degree programs/concentrations are included in attachments 21 through 83 [21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62][63][64][65][66][67][68][69][70][71][72][73][74][75][76][77][78][79][80][81][82][83]. These programs are not the “best” 63 programs but rather were chosen at random from all of the baccalaureate, master’s, doctoral, and professional programs; they are thus representative of the whole, necessarily demonstrating the range of quality while also demonstrating the full institutional commitment to student learning outcomes assessment. Information for the 2011 Cycle and earlier cycles is available in the LSU Assessment Matrix on the LSU institutional Website. Links to the outcomes reports of more than 60 degree programs for the 2011 Cycle are provided below.
The Dean’s Authentication Process
After receiving from the Office of Assessment and Evaluation an analysis of the implementation status for each degree program that reports to the college, the dean uses a carefully designed rubric to evaluate the planning and assessment of each program for the current cycle. The dean’s formal authentication, added to the cycle in 2011, has improved the quality of degree program assessment formats by underscoring the necessity for describing and implementing valid, ongoing processes for student learning outcomes assessment. Using an electronic rubric designed around the three criteria of 3.3.1.1, the dean reviews in detail the current status and quality of the format, including the Learning Outcomes Report for the academic cycle. The rubric includes a feedback component for each criterion of the comprehensive standard. In this process, which is proving to be a highly effective aspect of the university’s general process for assessing student learning outcomes, deans compare information provided in the current report with issues and concerns raised in previous reports. The process facilitates deans’ meeting with degree program administrators and faculty to discuss the findings and strategies for using them to improve student learning in the program. Upon the dean’s recommendation, departmental administrators and teaching faculty meet with professional staff in the OAE to discuss adjustments to the planning and assessment format. During the following spring semester, the teaching faculty and department administrators plan curricular changes to be implemented during the next academic year. Examples of completed dean’s authentication rubrics are available in attachments 84 through 122 [84][85][86][87][88][89][90][91][92][93][94][95][96][97][98][99][100][101][102][103][104][105][106][107][108][109][110][111][112][113][114][115][116][117][118][119][120][121][122].
The Learning Outcomes Report
A required Learning Outcomes Report for each cycle presents an analysis and interpretation of data generated through assessment measures implemented primarily during students’ final year of study. The cycle requires posting of the data files in the TaskStream Data Sets module by June 15. Members of the faculty responsible for the degree program subsequently meet to analyze and interpret the data and to post the formal report by October 15. The report highlights successes, indicates changes to the degree program templates (e.g., learning outcomes statements, revisions of assessment methods) and describes action plans based on interpretation of the data, particularly with respect to such curricular changes as those pertaining to teaching strategies and course design. Attachments 123 through 250 show examples of learning outcomes reports for the randomly chosen sample of LSU academic degree programs noted above, including examples from baccalaureate, master’s, doctoral, and professional programs and representative of the cross-section of degree program assessment templates in the TaskStream AMS [123][124][125][126][127][128][129][130][131][132][133][134][135][136][137][138][139][140][141][142][143][144][145][146][147][148][149][150][151][152][153][154][155][156][157][158][159][160][161][162][163][164][165][166][167][168][169][170][171][172][173][174][175][176][177][178][179][180][181][182][183][184][185][186][187][188][189][190][191][192][193][194][195][196][197][198][199][200][201][202][203][204][205][206][207][208][209][210][211][212][213][214][215][216][217][218][219][220][221][222][223][224][225][226][227][228][229][230][231][232][233][234][235][236][237][238][239][240][241][242][243][244][245][246][247][248][249][250].
Examples of the Impact of Degree Program Assessment
As noted above, the student learning assessment formats for all 214 academic degree programs are documented in TaskStream and available for viewing. This information includes the learning outcomes reports for the 2011 Cycle and the full assessment format and learning outcomes reports for the 2012 Cycle. The assessment format and data for the 2013 Cycle are currently available, with the results of the required annual evaluation of the data due to be documented in TaskStream by October 15. Below are excerpts from learning outcomes reports that address the impact of learning outcomes assessment at the degree program level.
BS in Geology: The Learning Outcomes Report for the BS in Geology for the 2012 Cycle indicates that programmatic changes based on disappointing assessment results for the 2011 Cycle contributed to improved student learning during the next cycle. Faculty increased the core requirement by four credit hours and added History of the Biosphere (Geol. 2061) as a prerequisite for the Field Camp experience (Geol. 3666). The change appears to have had a dramatic impact on students’ “ability to conduct and to analyze field-based problems,” as all students met or exceeded expectations in the subsequent Geology 3666 course. The following excerpt is from the Learning Outcomes Report for the 2012 Cycle:
Executive Summary - Use of Results to Improve Program: The majority of the geology majors met or exceeded the threshold levels for all learning outcomes.
LO 4: Students will demonstrate their ability to solve complex geologic problems.
Assessment Process: The assessment instrument was completed for each BS Geology major in the 4000-level courses by the faculty member who taught each course. The instructor ranked each student’s ability in each category on a scale of 4 (excellent) to 1 (poor) for a required research project or paper, a term paper, or review paper, depending on how each faculty member designed the course. As individual faculty members assess this learning outcome within their individual courses, inter-rater reliability is a concern. The assessment process used to evaluate critical thinking skills will address inter-rater reliability by establishing a calibration procedure in which all of the faculty who are teaching 4000-level courses during the academic year assess the work of the subset of the students who are enrolled in 4000-level courses offered during that academic year.
During 2011, assessment reports for 25 students were submitted. For each course, the number of students within each rating per category was determined (i.e., explanation of issues, context and assumptions, thesis/hypothesis, and conclusions). Those students scoring 1 (poor) and 2 (fair) did not meet the learning outcome. Those scoring 3 (good) met the learning outcome, and those scoring a 4 (excellent) exceeded the learning outcome . . . .
Results: Seventy-two percent of the students clearly and comprehensively explained an issue . . . ; 92% stated the context and assumptions clearly . . . ; 88% clearly stated the hypothesis to be tested . . . ; and 84% drew reasonable conclusions. . . . .
Use of Results to Improve Program: Based on our evaluation of student performance in 4000-level classes the previous year, we have made several changes to our program to improve performance at this level. We now carefully track student progress in the program to ensure that most students have completed the core curriculum and have the knowledge base they need . . . . This gives them a stronger background in the fundamentals in geology and should translate to a better performance at the 4000-level. We have increased all the 2000- and 3000-level core courses by one-lecture hour, making all these courses four-hour courses. This increases the time that students spend in the classroom in our critical core courses and better prepares them for taking 4000-level geology classes. In addition, we have increased the number of 4000-level courses to build a stronger knowledge base before graduation, from three to four, thus broadening their background at this level. The combination of all these changes has improved performance at the 4000-level.
Even though student performance meets or exceeds expectations, the faculty members need to address inter-rater reliability. Currently, each faculty member assesses the critical thinking of students in the course that the faculty member teaches. There has NOT been an attempt to determine if individual faculty are assigning similar rankings to student works. There is a potential inter-rater reliability issue. We will address inter-rater reliability by establishing a calibration procedure in which all of the faculty who are teaching 4000-level courses during the academic year assess the work of the subset of the students who are enrolled in 4000-level courses offered during that academic year [159] [160].
MS in Environmental Sciences: The faculty instituted changes in the content of core seminar courses and in course requirements, with the result that the average score on the “acquisition of knowledge” outcome improved from 3.15 for students graduating in the old curriculum to 3.59 for those graduating under the revised curriculum. The following excerpt is from the Learning Outcomes Report for the 2012 Cycle:
The assessment matrix uses three learning objectives for assessment: (1) The M.S. candidate will have advanced knowledge and broad background of the core scientific principles and research methodologies necessary to address complex environmental challenges; (2) The M.S. candidate will demonstrate the ability to independently plan and carry out a thesis or professional team-research project; and (3) The M.S. candidate will be able to effectively communicate knowledge of environmental sciences and their research results both in oral and written form.
We use the approved direct assessment survey (Graduate Student Assessment Survey, the form can be found on the LSU Assessment Matrix website) after each student’s thesis or non-thesis defense. We ask the major professor and the committee members to fill out the assessment survey immediately after the oral defense, and we collect the survey data in a separate folder. We then average the responses from each committee member to determine the score for each student. The score ranges from 1 to 4, indicating a rank of weak, fair, good, or superior. Both the departmental graduate committee and the Chair evaluate the data and discuss the results for this report.
The table below shows the results of the assessment for this reporting period according to the three objectives.
Learning Objective (n=16)          Mean    Standard Deviation
I: Knowledge & Research Methods    3.59    0.32
II: Research Ability               3.64    0.34
III: Communication Skills          3.56    0.37
An objective is met if 80% of our graduating students score 3.0 or higher. Of the 16 students assessed, two students had a score equal to 3.0 in Learning Objectives I and II (12.5%), and the same two students had a score below 3.0 in Learning Objective III (12.5%). The rest of the students were assessed to be higher than 3.0 in all learning objectives. Based on these statistics of students graduated within the reporting period, we can conclude that all three learning objectives have been met.
Previous data indicated that the mean score for Learning Objective I (Knowledge & Research methods) was the lowest, which implies that most faculty considered the training of students in the broad background of scientific principles and research methodologies the weakest in a relative sense. This weakness has been addressed by changes in ENVS 7700 (Integrated Environmental Issues). The qualitative, written comments by the assessors (the faculty) also have stressed the point of having students be well versed in core scientific principles as well as research methods.
Use of the Results: In the Fall of 2008, the Department introduced a new curriculum, which requires students to take two courses from each of three priority areas: biophysical systems, environmental planning and management, and environmental assessment and analysis, plus one additional course, and the two core courses (ENVS 7700 and ENVS 7995 (Environmental Seminar)). The students who went through the new curriculum began graduating in spring 2010. All of the graduates represented in the current assessment graduated under the new curriculum, and their average performance with respect to the first learning objective was significantly higher (3.59 versus 3.15) than students who graduated under the old curriculum. Thus we feel that the transition to the new curriculum has produced desirable results. One of the core courses, ENVS 7700, has recently been modified to include more emphasis on experimental design and data analysis, with the expectation that this will further improve our students’ grasp of research methods.
The faculty as a whole is reasonably satisfied with the results of the assessment for this period, but we will continue to explore ways to improve our mentoring of students and the methods of assessment. The mean scores for Learning Objectives 1 and 2 were slightly higher this reporting period than the previous year (3.43 to 3.59 and 3.49 to 3.64, respectively). The mean score for Learning Objective 3 showed a slight drop (3.61 to 3.56). There is concern that some of our graduates are not quite achieving the level of communication skills we expect (two students this year scoring below 3.0 on Learning Objective 3). This fall, the faculty will discuss how we mentor students in this area. Changes in our Environmental Seminar course and investigation into methods to identify, as early as possible, students who may need extra mentoring to improve their communication skills will be considered.
We find our assessment matrix useful and our direct assessment instrument effective; hence there is no need for any substantial adjustment at this point. We plan to continue the use of the same instrument in the next several years so that we can maintain a consistent database with a large number of students [251].
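The scoring procedure the department describes reduces to two steps: average each student's committee ratings on the 1-4 scale, then test whether at least 80% of the graduating cohort averages 3.0 or higher on an objective. The following Python sketch is a hypothetical illustration of that computation (the names and the small example cohort are assumptions, not the department's data or instrument).

    # Hypothetical illustration of the computation described above: each student's score on an
    # objective is the average of the committee members' ratings (1 = weak ... 4 = superior), and
    # an objective is met when at least 80% of graduating students average 3.0 or higher.

    def student_score(committee_ratings):
        """Average one student's committee ratings for a single learning objective."""
        return sum(committee_ratings) / len(committee_ratings)

    def objective_met(per_student_scores, threshold=3.0, required_share=0.80):
        """True when the required share of students reaches the threshold score."""
        meeting = sum(1 for s in per_student_scores if s >= threshold)
        return meeting / len(per_student_scores) >= required_share

    # Illustrative cohort: ratings from three committee members for each of four students.
    cohort = [[4, 3, 4], [3, 3, 3], [4, 4, 3], [2, 3, 3]]
    scores = [student_score(r) for r in cohort]
    print(scores)                 # roughly [3.67, 3.0, 3.67, 2.67]
    print(objective_met(scores))  # False here: only 3 of 4 students (75%) reach 3.0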
BS in Natural Resources Ecology and Management: During the 2012 Cycle, faculty revised rubrics after attaining unsatisfactory results in the 2011 Cycle. Using the revised rubrics, they identified weaknesses in students’ acquisition of broad-based knowledge and subsequently redesigned the content of 4000-level courses, hired faculty specialists in aquaculture and conservation genetics, and decreased the size of the capstone course, with the expectation that assessment data in the 2013 Cycle will show a narrowing of the current gap between the actual and projected levels of learning. The following excerpt is from the Learning Outcomes Report for the 2012 Cycle:
Learning Outcome 1.1: Development of Knowledge
Analysis and Interpretation: During the 2011-2012 assessment period, 33 students completed the post-test. Scores improved between the pre- and post-tests by an average of 40%, or 13.2 of 78 possible questions, during this assessment period. This represented a slight decline compared to an average 49.7% improvement in scores during the 2010-2011 assessment period and 60.7% improvement during the 2009-2010 assessment period. Based on the decline in the improvement percentage in 2010-2011 compared with 2009-2010, the faculty extensively revised the exams to reflect changes in the curriculum since 2007 that resulted from faculty departures and modifications made in response to past assessments; however, the revision of the exam could not have played a role because students will not have completed both revised pre- and post-tests until 2016 at the earliest. Most likely, the declining scores observed in the 2011-2012 and 2010-2011 assessments were the result of course content unavailable to students because of faculty departures. Because we can track performance on individual questions that are linked to specific course(s), it is relatively simple to identify the areas where students score poorly. Investigation into the areas of student weakness suggests that vacancies in aquaculture (not filled until the 2009-2010 academic year), conservation genetics (not filled until the 2010-2011 academic year), freshwater fisheries/ichthyology (still vacant), and wildlife management (still vacant) resulted in students who did not have course experiences associated with these faculty members (e.g., RNR 2002, 3005, 4002, 4022, 4037, 4103, 4051, and 4151). Assessment of second-year students’ skills was meant to measure their progression in the program. Faculty expect that a majority of the 16 evaluated skills will score a 3 or higher of a possible 5. During the 2011-2012 assessment, of 640 (16 metrics x 40 students) possible scores, only 42 (6.5%) fell below a score of 3. Therefore, faculty expectations were met.
Action Plan: Because post-test scores continue to decline, faculty meetings and discussions resulted in several potential solutions. The hiring of an aquaculture specialist and conservation geneticist over the last several years should eventually address weaknesses in aquaculture and conservation genetics areas. These faculty will address deficiencies in questions associated with RNR 2002, 4022, and 4103. In other words, this problem is anticipated to correct itself. However, because we have documented that faculty departures have a direct impact on test scores, the faculty have decided to take several proactive steps. The promotion of Dr. Rutherford to an administrative position in 2007 left no faculty associated with RNR 4037 – Biology of Fishes or RNR 4145 – Ichthyology. Initially, RNR 4145 was taught by a faculty member in Biological Sciences during 2008 and 2009 as BIOL 4145 (the course is cross-listed). The faculty member has indicated that he will only offer the course once every three years in the future. During 2010, the course was taught by teaching assistants under the supervision of Dr. Rutherford. RNR 4037 has not been offered since 2007. To address both courses, Dr. Green has agreed to teach RNR 4145 annually and RNR 4037 in alternating years in addition to his previously assigned courses. The departure of Dr. Chamberlain in 2011 also resulted in gaps in the curriculum. Dr. Stouffer has begun to assume some of Dr. Chamberlain’s teaching, specifically RNR 3018, in addition to his previously assigned courses, and Dr. Reed has agreed to continue to cover RNR 3005. We remain unable to address RNR 4051 and 4151 with existing faculty. The faculty believe that quickly addressing these gaps in the curriculum will prevent further declines in scores and potentially increase scores to 2009-2010 levels, which is the last time students could have completed the program with a full complement of faculty during their time at LSU. Ultimately, returning scores to 2009-2010 levels will likely require replacing the vacant freshwater fisheries and wildlife management positions. The faculty anticipate that test scores in 2014 should indicate whether these strategies will be successful. Skills development scores met faculty expectations, and the faculty did not believe that any additional changes to courses or curriculum were warranted.
Use of Results: The post-test results were extensively discussed among the faculty following their presentation at the August 2012 faculty meeting. The declining post-test scores in two consecutive assessments alarmed the faculty and confirmed suspicions about the long-term consequences of interruptions in courses caused by faculty vacancies. Faculty have assumed responsibility for RNR 3005 (Dr. Reed), RNR 3018 (Dr. Stouffer), RNR 4037 and 4145 (Dr. Green) to address weaknesses exposed in the post-test results.
Learning Outcome 1.2: Knowledge Across the Curriculum.
Analysis and Interpretation: Based on the 2010-2011 assessment, faculty revised our instrument and metrics to evaluate this Learning Outcome. Many faculty contributed ideas, and ultimately an existing rubric from New Century College was selected and modified to fit the outcome. Our adapted metrics included an evaluation of knowledge specific to the core curriculum in the B.S. in Natural Resources Ecology and Management, as well as an additional metric evaluating knowledge gained from areas of concentration and general education courses. We believed that students whose training is too narrowly focused are at a disadvantage in their careers and in life in general. Therefore, the metric included aspects reflecting our expectations of learning gleaned from general education humanities and social science courses, specifically ethical, philosophical, and social justice considerations. As with Learning Outcome 1.1, our expectation was that a majority of students would score a 3 or higher of a possible 5. During the 2011-2012 assessment period, we conducted 132 assessments of 33 students. No comparisons with previous years were possible because of the revision to the metrics. Only 6 of 132 assessments (4.5%) fell below 3 of 5. Overall, faculty expectations were met.
Action Plan: Although faculty expectations were met, the 6 scores of 1 or 2 were troubling. Faculty did not believe that these results warrant significant changes or revisions to courses or curriculum. These metrics were designed such that few students would score 4 or 5. Therefore, the metric distribution follows expectations. We did recognize that where lower scores occurred, these scores were related to breadth of knowledge. Therefore, two small changes in course offerings and scheduling (e.g., offering undergraduate version of waterfowl ecology and moving forest fire management to a more available time) should offer an increased diversity of experiences for undergraduates. Faculty will continue to monitor progress in this area, specifically when multiple years of data become available for comparison.
Use of Results: These results were extensively discussed among the faculty. Generally, the faculty were pleased with the results but were concerned about the lower scores in metrics concerning the breadth of knowledge. We believe that breadth of knowledge is very important because student career possibilities are very broad in this field. Dr. Rohwer volunteered to adapt an existing graduate course in waterfowl ecology for undergraduates, and Dr. de Hoop moved his forest fire management course to the spring semester to allow more students to take the course [252] [253].
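Two of the quantities cited in the excerpt above are straightforward to compute: the average pre-to-post-test gain on the 78-question exam and the share of skills ratings falling below 3 on the 1-5 scale. The Python sketch below illustrates those calculations under stated assumptions (hypothetical data and function names; in particular, it treats the reported 40% figure as the average gain relative to each student's pre-test score, an interpretation the excerpt does not state explicitly).

    # Hypothetical sketch of the two checks cited above: average pre-to-post-test gain on a
    # 78-question exam (also expressed relative to each student's pre-test score), and the
    # share of 1-5 skills ratings that fall below 3. Data and names are assumptions.

    def average_improvement(pre_scores, post_scores):
        """Mean gain in questions answered correctly, and mean gain relative to the pre-test score."""
        gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
        mean_gain = sum(gains) / len(gains)
        mean_relative = sum((post - pre) / pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)
        return mean_gain, mean_relative

    def share_below_threshold(skill_ratings, threshold=3):
        """Fraction of all skills ratings (e.g., 16 metrics x 40 students) scoring below the threshold."""
        return sum(1 for r in skill_ratings if r < threshold) / len(skill_ratings)

    # Illustrative numbers only; the report cites an average gain of 13.2 of 78 questions and
    # 42 of 640 skills ratings (6.5%) below 3.
    pre = [30, 28, 35, 33]
    post = [44, 40, 47, 45]
    print(average_improvement(pre, post))                   # about (12.5, 0.40)
    print(share_below_threshold([3, 4, 2, 5, 3, 1, 4, 3]))  # 0.25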
BA in English: In 2008 the department revised the learning outcomes statements of its baccalaureate concentrations so that each outcome would be measurable, established and described valid assessments for each outcome, carefully described the process for implementing the assessments, clarified evaluation rubrics, and rewrote the report to indicate the relation of capstone course content to degree program assessment. On November 17, 2009, the University Assessment Council recommended that the teaching faculty in the department revise learning outcomes statements so that each would be unified and measurable; establish and describe direct, valid measures for each outcome; clarify evaluation rubrics and attach them to the degree program matrices; and indicate in the biennial report the relation of capstone course content to degree program assessment. In response to this feedback, the department articulated a revised direct assessment method in the LSU Assessment Matrix: Teachers of capstone courses fill out forms for each student at the end of the fall and spring semesters. Revised assessment forms provide weighted scores ascertaining how well students are meeting five outcomes. The individual student is given a score by the instructor for each of the outcomes: 3 Surpassed Outcome; 2 Met Outcome; 1 Failed-to-Meet Outcome. The compiled scores provide an aggregate view of general student performance and a view of how well each outcome is met. Results are tabulated and shared with all professors annually at meetings held the week before classes begin. As noted in the “Guidelines for English Department Capstone Courses,” discussion of the results “will give faculty members the information necessary to evaluate their own pedagogy and to consider appropriate curriculum adjustments and revision” [254]. At the end of fall semester 2009, the department posted the aggregate data for the capstone assessments in all four undergraduate curricula, along with changes approved by the faculty as a result of their interpretation of the results. On the basis of deficiencies disclosed by the assessments, the faculty instituted a requirement that all students in the Literature Concentration “take at least one upper-level elective at the 4000-level” [255]. The following excerpt is from the Learning Outcomes Report for the 2012 Cycle:
Summary of previous year’s findings: As a result of assessment reports, we found in previous years that many students do not have the necessary skills in conducting research or in using secondary literature. We charged a sub-committee to recommend how components of research and literary criticism should be incorporated into the junior-level required literary surveys and other junior- and senior-level courses before the capstone. It was decided that research components would be added to 3000- and 4000-level courses, and this decision was implemented in 2010 through guidelines in the English Department Undergraduate Handbook [p.29]: “While students will not be expected to learn all research skills, students should be required to complete at least one research activity. Research activities might include: archival work, an annotated bibliography, a research proposal, or a full research paper.”
In faculty discussions in previous years, it was also decided that students should take a critical theory course prior to enrolling in a capstone course. This change was approved by the Department, and submitted to the Committee on Academic Planning and Program Evaluation (CAPPE), and should appear in the General Catalog starting in Fall 2012 [128].
The positive results obtained in the 2010-11 assessment of the Writing and Culture concentration are a preliminary indication that these changes are succeeding [256].
Additional Examples: The links to the random sample of 64 degree program assessment templates in TaskStream and to the associated learning outcomes reports show additional examples of the LSU teaching faculty’s analysis of results and use of the data to impact student learning.
Academic Program Review
An internal Academic Program Review process designed to ensure continuous improvement of academic units through systematic, cyclical review attends to degree programs’ strategic planning and evaluation processes, including formats for assessing student learning outcomes. The process is based on a seven-year cycle or, whenever possible, is conducted in conjunction with other timely reviews by discipline-specific accrediting agencies or by the Louisiana Board of Regents. The process spans a calendar year and may begin either on August 1 or in early January [257].
Administration of Program Review
The Office of Academic Affairs (OAA) formally collaborates with the University Review and Assessment Council (URAC) in the administration and practical implementation of a formal internal process for Program Review. Members are appointed by the executive vice chancellor and provost to serve three-year terms, and the council is chaired by the vice provost for academic programs, planning, and review or by the provost’s designee. One member with faculty rank from each college (a total of fourteen) and six additional designees from support divisions constitute the council, which includes six ex officio members from support units with responsibilities associated with planning and assessment. Responsibility for documentation of the council’s activities resides with the OAA.
By 1996 the institutional planning process at LSU included a sophisticated formal program review process with formal elements such as a self-study, the study and report of an internal review team, review by two external experts in the field, and a memorandum of agreement. Somewhat detached and too unwieldy to be sustained, the process was revised in 2010 after general evaluation by the University Review and Assessment Council, resulting in a strengthening of the institutional responsibility for the details of implementation and reporting. Below, a description of the components of the newer process is followed by excerpts from reports associated with the formal evaluation of educational programs in the Department of Agricultural Economics in 2011-12. Full documentation of program review going back fifteen years to the beginning of the previous process may be viewed on the LSU Website under the Department Resources icon.
Components of the Revised Internal Program Review Process
The Program Review Process includes the following primary components:
A self-study report by the academic unit;
Program evaluation and associated formal report by an external reviewer;
An internal panel evaluation of the unit and a report of findings and recommendations; and
The unit’s response to internal panel review, including an Action Plan (MOA).
Self-Study Report
The units prepare a self-study report that addresses accomplishments since the last program review; provides a listing of awards; provides information on faculty and/or staff, including research productivity, teaching enhancements, engagements, and partnerships; presents information on students; describes assessment of strategic goals, including assessment of student learning and curriculum or program changes; addresses challenges or barriers to achievement of strategic goals; and presents a strategic action plan. Accompanying appendices include the most recent strategic plan, annual reports since the last program review, the annual Learning Outcomes Report, and the action plan (formerly memorandum of agreement) from the previous program review. The Office of Budget and Planning provides data and statistical analyses to support the department’s review of its educational programs.
External Reviewer
Input from external reviewers, a critical component of program review, provides an evaluative opinion from recognized experts in the field, ensures objectivity, and provides varied perspectives concerning the program's relationship to the discipline at large. Units provide the names of at least five proposed external reviewers (in rank order with “1” as the unit’s top choice) from peer or similar institutions. The list is reviewed by the URAC, which makes the decision on the selection of the reviewer. The external reviewer evaluates and formally reports on the unit’s self-study.
Internal Panel
An internal panel is selected for each program under review. The panel, which acts as a subcommittee of the URAC, is determined by the council with the concurrence of the executive vice chancellor and provost or by the provost's representative. The panel implements the program review and makes recommendations to the provost in a formal report. The process involves a careful review of the self-study report and of the external reviewer’s report; meetings with the unit’s administration, faculty, staff, and students; and, when applicable, a tour of facilities. The internal panel is chaired by an experienced senior faculty member who is not in the same unit as the program. Two additional faculty members also serve, and, if possible, one of the two is from the college of the unit and one is from a different college. An effort is made to ensure that at least one of the panel members has graduate faculty status.
Action Plan
The program review process concludes with an action plan, comprising recommendations based on the internal panel’s and external reviewer’s reports, proposed actions by the responding unit, and a timeline for completing those actions. Progress toward accomplishing the action plan is documented in the unit’s annual strategic report and considered in the unit’s next program review.
NOTE: The narrative continues in "3.3.1.1 (Continued)."