It's an emerging field, and books like "Library Analytics and Metrics: Using Data to Drive Decisions and Services", with more to follow, are starting to appear.
Still, the definition and scope of anything new is always hazy, and as such my thoughts on the matter are going to be pretty unrefined, so please let me think aloud.
But why library analytics? Libraries have always collected and (hopefully) analysed data, so what's new this time around?
In many ways, interest in library analytics can be seen to arise from a confluence of factors both within and outside academic libraries. Here are some reasons why.
Trend 1: Rising interest in big data, data science and AI in general
Recently, I saw a tweet reporting that Jim Tallman, CEO of Innovative Interfaces, had declared that libraries are 8-10 years behind other industries in analytics.
Well, if we are, a big culprit is the integrated library system (ILS) that libraries have been using for decades. I haven't had much experience poking at the back-end of systems like Millennium (owned by Innovative), but I've always been told that report generation is pretty much a pain beyond the fixed standard reports.
As a sidenote, I always enjoy watching conventionally trained IT people come into the library industry and then hearing them rant about the ILS. :)
In any case, with the rise of library services platforms like Alma and Sierra (though someone told me that all Sierra really does is add SQL access, that's still a big improvement), more and more data can be easily uncovered and exposed.
You don't even have to be a hard-core IT person to drill into the data, though you can still use SQL queries if you want.
With Alma Analytics you can access COUNTER usage statistics uploaded via UStat (eventually UStat is to be absorbed into Alma). Add Primo Analytics, Google Analytics or similar tools that most universities use, and a big part of users' digital footprints is captured.
Want to see users and the number of loans by school in Alma? A couple of clicks and you have it.
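Conceptually, that kind of report is just a group-and-count over loan records. A minimal sketch in Python, using invented records and field names (not actual Alma columns), might look like this:

```python
from collections import Counter

# Hypothetical loan records as they might be exported from a library
# system; the field names here are illustrative, not real Alma columns.
loans = [
    {"user_id": "u1", "school": "Engineering"},
    {"user_id": "u2", "school": "Business"},
    {"user_id": "u3", "school": "Engineering"},
    {"user_id": "u4", "school": "Science"},
    {"user_id": "u5", "school": "Engineering"},
]

# Count loans per school, most active school first.
loans_by_school = Counter(rec["school"] for rec in loans)
for school, count in loans_by_school.most_common():
    print(f"{school}: {count}")
```

The point of tools like Alma Analytics is that this aggregation happens behind a point-and-click interface instead of in code.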
Unfortunately, there still seems to be no easy way to track usage of electronic resources by individual users, as COUNTER statistics are not granular enough. The only way is by mining EZproxy logs, which can get complicated, particularly if you are interested in downloads and not just sessions.
These are still early days, of course, but things will only get better with open APIs and the like.
Both assessment (understanding in order to improve or make decisions) and advocacy (showing value) require data and analytics.
Academic libraries that do such studies in isolation are likely to experience less success.
A library focus on analytics also ties in nicely as universities themselves are starting to focus on learning analytics (with the UK, supported by JISC, probably in the lead).
Much of the current learning analytics field focuses on LMS (learning management system) data, as vendors such as Blackboard, Desire2Learn and Moodle provide learning analytics modules that can be used.
In many institutions like mine, it involves using Alma Analytics, EZproxy logs, Google Analytics, gate counts and other systems that track user behaviour.
This in many ways isn't anything new, though these days there are typically more of such systems to use and products are starting to compete on the quality of analytics available.
This type of activity can be opportunistic, ad hoc and in some libraries siloed within individual library areas.
Typically such dashboards can be set up for public view or, more commonly, for internal users (usually within the library, ideally institution-wide), but the main characteristic is that they go beyond showing data from one library system or function (so, for example, an Alma dashboard or a Google Analytics dashboard doesn't quite qualify as a library dashboard as I define it here).
Remember I mentioned above that library systems are becoming more "open" with APIs? This helps to keep dashboards up to date without much manual work.
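The basic pattern is a scheduled job that pulls a report over a system's REST API and reshapes it for the dashboard. A minimal sketch, using an invented JSON payload (not a real Alma API response; a real job would fetch it over HTTP with an API key):

```python
import json

# Invented example of a report payload a library system's REST API
# might return; structure and field names are assumptions.
api_response = json.loads("""
{
  "report": "loans_by_month",
  "rows": [
    {"month": "2016-01", "loans": 1200},
    {"month": "2016-02", "loans": 980}
  ]
}
""")

# Reshape into (label, value) pairs ready for a dashboard chart,
# so the chart refreshes whenever the job reruns.
dashboard_series = [(row["month"], row["loans"]) for row in api_response["rows"]]
print(dashboard_series)
```

Run this on a schedule and the dashboard stays current without anyone exporting spreadsheets by hand.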
I'm aware of many academic libraries in Singapore and internationally creating library dashboards using commercial or open-source systems like Tableau, QlikView and others, but they tend to be private.
Here is my Google Sheet listing public ones.
This type of activity breaks down barriers between library functions though it can still be siloed in the sense that it is just the work of a University Library separate from the rest of the University.
Such studies could be one-off, in which case the value is arguably much less than with an approach like the University of Wollongong's Library Cube, where a data warehouse is set up to provide dynamic, up-to-date data that people can explore.
I've already mentioned Nottingham Trent University's engagement scores, where students can log into the learning management system to see how well they are doing compared to their peers.
The dashboard can also tell them things like "Historically, 80% of people who scored XYZ in engagement scores get Y results".
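Behind a statement like that is a simple conditional frequency computed over historical records. A toy sketch with invented data:

```python
# Invented historical (engagement_band, outcome) pairs; a real system
# would draw these from past cohorts.
history = [
    ("high", "good"), ("high", "good"), ("high", "good"),
    ("high", "poor"),
    ("low", "good"), ("low", "poor"), ("low", "poor"),
]

def share_with_outcome(records, band, outcome):
    """Fraction of students in an engagement band with a given outcome."""
    in_band = [o for b, o in records if b == band]
    return sum(1 for o in in_band if o == outcome) / len(in_band)

print(share_with_outcome(history, "high", "good"))  # 0.75
```

The statistics are trivial; the hard parts are assembling the historical data and presenting it responsibly to students.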
This type of analytics I believe is going to be the most impactful of all.
Hierarchy of analytics use in libraries
I propose that the activities listed above represent increasing levels of capability and, perhaps, impact.
It goes from:
Level 1 - Any analysis done is library-function specific. Typically ad hoc analytics, though there might be dashboard systems created for only one specific area (e.g. a collection dashboard for Alma or a web dashboard for Google Analytics)
Level 2 - A centralised library wide dashboard is created covering most functional areas in the library
Level 3 - Library "shows value" by running correlation studies and the like
Level 4 - Library ventures into predictive analytics or learning analytics
Many academic libraries are at level 1 or 2, and a few leaders are at level 3 or even level 4.
Analytics requires deep collaboration
This way of looking at things, I think, misses an important element. I believe that as you move up the levels, silos increasingly get broken down and collaboration increases.
For instance, while you can easily do analytics for specific library functions in a siloed way (level 1), building a library dashboard that covers library-wide areas breaks down the silos between library functions (level 2).
In fact, there are two ways to reach level 2.
Firstly, libraries can go their own way and implement a solution specific to just their library. Even better is when there is a university-wide platform that the university is pushing, with the library just one among various departments implementing dashboards.
The latter is better because, with a university-wide push for dashboards, the next stage is much easier to achieve: data is already on the university dashboard, and there is already university-wide familiarity with thinking about and handling data.
Similarly, at level 3, where you show value and run correlation and assessment studies, you could do it in two ways. You could request one-off access to student data (in particular, you need cooperation for many student outcome variables like GPA, though some data, such as class of degree and honours lists, may be publicly accessible), or, if there is already a university-wide push towards a common dashboard platform, you could connect the data together, creating a data warehouse. The latter is more desirable, of course.
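Once library usage and outcome data are joined (with permission), the core of a "show value" correlation study is small. A minimal sketch with invented numbers, assuming per-student loan counts have been matched to GPA:

```python
from math import sqrt

# Invented joined records: (loans borrowed, GPA) per student.
# Real studies need proper ethics clearance and far larger samples.
students = [
    (2, 2.8), (5, 3.1), (9, 3.4), (14, 3.6), (20, 3.9),
]

def pearson(pairs):
    """Pearson correlation coefficient of (x, y) pairs."""
    n = len(pairs)
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(students)
print(round(r, 2))
```

Note that a correlation like this shows association, not causation, which is precisely why studies such as the Library Cube are careful about how they frame their findings.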
By the time you reach level 4, it would be almost impossible for the library to go it alone.
Should the library designate one person whose sole responsibility is analytics? But beware of the co-ordinator syndrome! Should it be a team? A standing committee? A taskforce? An intergroup? It's unclear.