Below this introduction is an email from Michael Daly of rpk Group responding to questions I submitted through the project feedback form. These answers confirmed my worst fears about the project. The answer that alarmed me most is the one (bolded and highlighted) about the rationale for the metrics. I can't imagine writing a research proposal that said I was going to measure two dozen metrics, couldn't say exactly what we would learn from them, but insisted that all of them were equally important. None of the answers were very informative. Some of them are repeated verbatim in the FAQ on the UNCG site. But I understand that my questions were ones Michael could not go into detail on.
I was not able to attend the faculty forum at UNCG on March 20th (advising appointments), so all of the issues below may have been addressed there. A brief introduction before the short email exchange. The dashboards rpk is creating are very similar to those of a dashboard product the Educational Advisory Board (EAB) markets as "Academic Performance Solutions." The Academic Performance Solutions dashboards were interesting and could be good tools for deans and department heads. I engaged with EAB quite a bit over five years as provost and investigated purchasing the product at two schools, going through the process twice at the University of Arkansas. We decided not to purchase it (it is more than just software: like rpk, they try to work with the institution's data) for two reasons.

First, data quality. Arkansas had ERP software that was developed in house many years ago, and the data in that system was bad. UNCG's Banner ERP was likewise not built to produce integrated reports on academic metrics, and a lot of the central data is junk because of how it was collected. At Arkansas I often tried to give the deans "dashboards" and tools, but I always had them check the data before I would "publish" anything. Every department had to run shadow systems to keep relevant data because they could not get it out of Arkansas' ERP, and their data was often much better than the central data. That is pretty much what is happening at UNCG. I can confirm it is a very frustrating situation for a provost not to be able to get good reports out of the ERP, but it was the reality when I was provost at Arkansas and during my short tenure here. I don't think the data has gotten better at UNCG since January 2021, but I honestly don't know. Given the errors in the teaching performance index, it seems it has not.

Second, most department chairs and heads at the University of Arkansas were already using data on class fill rates and DFW rates, by course and by instructor, and some had begun the curricular complexity analyses developed as part of an APLU student success initiative in which I was significantly involved, and had started streamlining curricula. The EAB web platform made it easier to view data quickly, but we concluded at the time that it would not provide anything new for Arkansas because: 1) it was very expensive and the ROI did not seem strong given the quality of our data; and 2) most importantly, the deans and heads felt the data was useless to them without benchmarks. EAB had been signing up institutions for Academic Performance Solutions over the five years I was in conversations with them, and as they signed up more institutions the pool of benchmarks grew, but at the time there were not enough relevant benchmarks to make the deans and department heads feel the product would add any value, and I agreed with them. This does not mean that having the dashboards would not have helped us make better decisions, and there are more institutions to benchmark against now. But, at least in my opinion, one has to understand the data, know that it is correct, and be able to read it in the context of peer institutions (not against the institution itself at some previous point in time) to create useful dashboards. rpk has no benchmark data other than IPEDS.

The other thing that stands out to me from the answers to the questions below is that only the academic side is being examined in this kind of detail.
Doing program reviews is always important, and certainly during a budget crisis, so I am not criticizing the attempt to do them, though I don't like the process. Over my career I was involved in several administrative efficiency and process reviews using the firm Huron, which is kind of the go-to firm for these analyses. It is clear to me that UNCG has some costly and counterproductive inefficiencies in HR, finance, and facilities, and possibly in other administrative functions. Huron is well known for making solid recommendations to eliminate those administrative inefficiencies, which can save substantial costs. They are also quite expensive. But I find it telling that the only serious review of efficiency is being done on the academic side. I don't have much trust in the Chancellor's Task Force for Sustainability, and that may not be justified. But my sense is that the administrative inefficiencies at UNCG (including in academic units) are quite large, not much progress has been made in fixing them, and some have gotten worse (e.g., HR, finance). When problems are large and difficult to fix, it is a good time to get some outside eyes. It would be quite expensive to hire a firm like Huron, and it has probably been considered, but I think there would be more trust at the faculty level if we had consultants looking at efficiencies across the institution. The reason I think it would help create trust is that a narrative that faculty aren't working hard enough seems to be driving this analysis as much as the budget challenges are. There has always been this narrative. It grew into a larger narrative in Republican circles when Richard O'Donnell, connected to Governor Rick Perry's administration, released this report, which presented data that basically concluded faculty workloads were the major source of cost inefficiency in Texas' universities. This led the UT and A&M systems, particularly UT-Austin and Texas A&M, to do a detailed per-faculty analysis of productivity and revenue generation, turning O'Donnell's analysis on its head: the detailed analysis showed that, on average, faculty generated far more revenue than they cost, as covered in this report from Inside Higher Ed. At the time, based on memory, there was a general recognition that the kind of productivity analysis done by O'Donnell (and in a paper by Richard Vedder in 2011) was wrong. Yet that narrative still lives on in anti-higher-education Republican circles and in places like the Wall Street Journal (Bob Shea sometimes gets his narratives from there), even though several detailed analyses have shown it to be a false narrative.

Another answer below that concerned me: Michael indicated that rpk's role is to find UNCG's data truth, yet he also indicated that they plan to create draft dashboards before the data has been reviewed at the unit level, hoping to fix and evolve it later. In my experience, once a dashboard is out there it is hard to change. For example, we know there are errors in the teaching performance index dashboard, but they have not been fixed, and I believe the dashboard is still up for faculty and administrators to view. And, as I said above, some dashboards might be useful without benchmarks, but I think most won't be. Some data aren't very useful at all, like the job market analysis, because they don't capture what jobs students actually go into or their trajectory after their first job.
That analysis basically assumes that our curriculum should be tied to the job market and that, for example, only business majors go into business, or only health majors go into health. We know that is not true, although I don't have the data at hand. I do know that all of the national data on income over a lifetime show that liberal arts and sciences majors generally catch up with many professional fields over a lifetime. See the old (2014) article Liberal Arts Majors Win in the Long Term published in Inside Higher Ed.

The Email Exchange (btw, my response was not my best, but for transparency this is all verbatim):

On Fri, Mar 17, 2023 at 10:17 AM Mike Daly <mdaly@rpkgroup.com> wrote:

Hello Professor Coleman,

Thank you for your inquiries to the project feedback form. Engaged and informed stakeholders are essential to a project such as this. We appreciate your asking these questions and the opportunity to respond. Below please find responses to your questions in red. In some cases, your questions may be modified and incorporated into the running FAQ document on the project's site.

"Why is the Delaware Study not mentioned in the data context? Why are the metrics not aligned with Delaware Study metrics? If you are using another data base to compare peer programs, what is it?"

rpk GROUP is working with UNCG to establish a single source of data truth that allows academic leaders to be as informed as possible about their departments and programs. The scope of the project does not include developing a new mechanism for comparing UNCG’s academic programs to its peers.

{Note from me: the provost apparently said at a forum that she does not trust Delaware Study data (she didn't explain why) and that institutions were no longer subscribing to it (implying it was because of trust issues). When I heard this, I looked at Delaware Study membership in 2022 and it was down. But sometimes institutions go in and out of it because they don't need the data every year and it is expensive to be a member. Her comment about not trusting the data led me to reach out to four of my old provost colleagues whom I looked up to as mentors (two are now presidents at major public institutions, one is a system VP for academic affairs, and one moved into a VPR role; one of them is kind of the dean of provosts at AAU and APLU because he has been in the role for something like 15 years). None of them had heard any conversations among provosts that Delaware data was not trustworthy, and they confirmed that, in their opinion, Delaware Study data was still the best data for benchmarking at a granular level. My sense from conversations I had with Bob Shea when I was provost is that the Delaware data aren't trusted because the benchmarking shows UNCG is actually more efficient than peer R2 universities in overall cost per credit hour in almost all departments and programs, and that went against Bob's narrative, which is similar to the narrative in the O'Donnell report. I think, in the end, cost per credit hour at the department and program level is the metric to manage toward (number of classes per faculty member is stupid because classes are not the same, and SCH per faculty member is stupid because we teach the size of classes our head assigns us, and none of us were hired with a position description saying our jobs were to sell credit hours). If the provost wants lower cost per credit hour, she should let deans and heads figure out how to do it.}
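To put a number on that last point, here is a back-of-the-envelope sketch of the cost-per-credit-hour calculation I have in mind. The department names and dollar figures below are invented purely for illustration; they are not UNCG data.

```python
# Back-of-the-envelope cost per student credit hour (SCH), with invented numbers.
# The point: dividing direct instructional cost by SCH delivered normalizes for
# section size and course type, unlike counting "classes taught per faculty."

departments = {
    # name: (direct instructional cost in dollars, student credit hours delivered)
    "Biology": (4_200_000, 28_000),
    "English": (3_100_000, 24_500),
    "Nursing": (5_600_000, 18_000),
}

for name, (cost, sch) in departments.items():
    print(f"{name:8s} cost/SCH = ${cost / sch:,.0f}")

# A department full of small clinical or lab sections can look "unproductive"
# on a classes-per-faculty count while still delivering credit hours at a
# reasonable unit cost; cost/SCH is a number deans and heads can actually manage.
```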
Trying to tie programs to labor market statistics placates politicians but doesn't really help students.

"Some of the data are hard to get and irrelevant. I know politics want us to match program to job demand. But, the only data that matters if students get jobs and feel successful in those job. In five years, the top jobs will probably different, or require training in things we don't even know. How are planning to find the job data[?]."

Projected labor market trends will be identified by cross-walking classification of instruction program (CIP) codes to standard occupation classification (SOC) codes. That crosswalk will be aligned with data made publicly available from the Bureau of Labor Statistics.

"I happen to particularly concerned in efforts like this, how is each data metric going to be used. The list looks like throwing spaghetti on a wall and see what sticks- and I find that horribly inefficient and dangerous. So, I think that each metric should have a paragraph abut why this data was chosen and how this data is going to help make decisions. If the answer's "we won't know until we see it" or if duplicates other metrics getting at the same thing, then that metric should be deleted. I don't believe in more metrics is better. Multi-metric models rarely work in these sorts of program evaluations. So, do you agree or disagree with this and why? Will there be short paragraphs describing why each data metric is important and how it will be used?"

rpk GROUP’s approach to understanding a diverse academic portfolio and academic departments is that no one data point is more important than another [bold is my emphasis- this is generally not a great way to talk about data in a community of scholars]. As any new data definitions are developed, UNCG’s established Data Governance processes will be utilized to formalize those definitions. UNCG’s structured and on-going training for department chairs and other academic leaders will be utilized as an opportunity to provide an initial introduction to how participants understand and use the academic data dashboards to make decisions.

"Are deans, heads, chairs and program directors going to get to view the data before anything is published? Again, as provost here, I can tell you that central data on academic performance and productivity is seriously flawed- departments have to keep their own shadow systems since Banner reporting is terrible. The dean/department level data is far better"

The development of the academic data dashboards that are part of rpk GROUP’s engagement with UNCG are intended to move UNCG toward a single source of data truth. Opportunities will be offered to stakeholders to review the dashboards in their early development stages. It is the expectation that these dashboards will be refined over time, as informed by the user experience and UNCG’s needs.

"Is the business model of RPK similar to the Hunter Group (the major consulting firm that were hospitals that were financially hemorrhaging -i.e.. to be the cost cutting experts for universities in financial crisis? Or, is RPK trying to compete with EAB's "Academic Performance solutions"?"

rpk GROUP is a higher education consulting firm that partners with clients throughout the U.S. and globally, including two-year and four-year institutions, public and private sector institutions, membership organizations, and foundations. We specialize in sustainable financial models, strategic platform creation, and the financial model behind mission and equitable student success.
"All of the metrics are academic metrics. Why are there no administrative metrics on performance, efficiency & cost? Having served as provost in three places there are several administrative functions that are broken and costly (particularly in opportunity costs)- HR is an example. I don't think that Student's First has demonstrated an ROI; mid-term grade reports and starfish reporting waste huge amounts of time. Starfish is a ridiculously bad early intervention system. Finance is a mess. Wil RPK look at revenue generating ideas that also help retention- e.g. amount of debt that triggers registration holds?" rpk GROUP’s engagement with UNCG includes providing data analytical support for the Chancellor’s Taskforce on Sustainability. That taskforce is focused on identifying potential opportunities to realize non-academic efficiencies in how UNCG provides services and supports to faculty, staff, and students. "Continuing from above Student affairs also has programs that are used by few students. Will there be dashboards for them? Will RPK examine the impact of the athletic fee on enrollment?" rpk GROUP will not be assisting UNCG in developing dashboards for student affairs. The impact of isolated fees on enrollment will not be part of rpk GROUP’s work. My response (not my best) [Dear Michael from rpk] I appreciate you taking the time to respond. You have confirmed my worst fears about the project-. Having been provost here, I can tell you that you will not get a single data truth at UNCG unless the data is reviewed before the dashboards are created at the department level. I know from experience that once a dashboard is created, it is much more difficult to examine and change the data. For example, a significant amount of data in the teaching productivity index is wrong. For example, the number 1 productive teacher in the dashboard is an instructor of record for a large number of lab sections and doesn't teach any of them. I was given 50% credit for a course that had co-instructors but the credit should have at least been 75-25. Had the department head seen the data, he would have changed that, because he was the co-instructor. In any case I appreciate your answers- but they were not very informative. I realize you are doing what you have been asked to do. Similar dashboards also can be made for student affairs functions ($/student' use of programs, etc); and certainly for HR (e.g processing time; $s per transaction) and facilities and finance (staff per student; cost vs market cost, revenue models for registration holds etc.); enrollment ($s spent/application; cost vs. yield rate for various activities). I feel comfortable guaranteeing that those kind of data will not be looked at buy a taskforce that communicates poorly- and only has partial expertise in examining cost and productivity issues in other units good luck with your work