Is anybody reading these docs?
A vital feedback loop to have in any documentation team is one that evaluates the success of documentation and answers questions like:
Are people reading the docs? Are they getting their needs met?
How do we quantify this and obtain the analytics that help determine it? It's not always easy. Data such as time spent on a page is ambiguous: four minutes might mean the person was fascinated and read everything, while five seconds might mean they found exactly what they were looking for. So how do you begin to know what data to collect and how to analyse it?
A broader question here is why collecting these metrics is important in the first place.
It may be that you want to demonstrate that tech docs are reducing support costs. In this case, how do you link data from two departments that probably function separately? I have found that support did not categorise calls according to the error, or that it was not possible to query their data and generate reports that could easily be contrasted with the relevant docs pages. They were more interested in data relevant to their own function, such as time to resolve a query.
It was also difficult to know whether the support agent had themselves used the docs to resolve a ticket, or whether a ticket was never created because the customer found the documentation online. Getting meaningful insights from data is challenging, so it's interesting to find examples of where it has gone well.
Page views
Tracking page views is useful for identifying which pages are most popular and so provide the most value to users. Page views can reveal how successful a page is, and when analysed alongside other data they can illuminate whether a particular topic has been addressed well and whether the navigation is effective and aligned with user priorities.
High views on troubleshooting pages are a useful indicator of onboarding friction with a new product or feature. The issue can then be highlighted or fed back to the team, perhaps with the aim of arranging a product demo or a sync-up with the Support team.
Understanding the mental models and terminology of your users plays a huge part in understanding how they search for content. Sometimes this is at odds with the language you have to use, perhaps for legal reasons. Here, unexpected traffic patterns can highlight the mismatch between your terminology and how users actually search for help. This is useful information for working on mitigations, or for adopting your users' terminology if possible.
Dwell time
How long people spend on a page can be a measure of success when looked at alongside other data. For example, depending on the page content, a quick bounce can indicate that the required information was easily found; on a long-form article, the same behaviour is interpreted differently.
Trending pages
Comparing page views over different periods of time can help you understand how user behaviour changes. For example, a popular page might mysteriously drop to a lower ranking while still being relevant. In this instance, it makes sense to research why that might be: have there been any changes to the user interface? Is the page as easy to find as it was?
Internal link clicks
This kind of data is very helpful for determining how successful your current navigation is. A good rate of internal link clicks verifies that the documentation is aligned with the mental model of your audience and anticipates their needs correctly. This is also useful data to have when assessing the success of a user journey mapped to user personas. Ideally, it will correlate nicely with the work already done in understanding who your audience is and what their goals are.
Surveys
These can be great for getting some binary information about a page's success: did you find what you were looking for, yes or no?
A real limitation, though, is getting those responses in the first place. Unless the response rate is high enough, the results aren't statistically significant enough to count as representative. For example, with 10,000 page views you ideally want a response rate of 10-30% for the data to be significant. The reality may be quite different, and with much less than this your data is probably biased towards people who are either really cross that they didn't find the information they needed, or really excited because they did.
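As a rough sanity check on whether you have enough responses, you can compute the normal-approximation margin of error for a yes/no proportion. A minimal sketch, assuming a made-up response count and yes-count rather than real survey data:

```python
import math

def margin_of_error(responses: int, positive: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a yes/no survey proportion."""
    p = positive / responses
    return z * math.sqrt(p * (1 - p) / responses)

# With 10,000 page views, a 10% response rate gives 1,000 responses.
# If 700 of those answered "yes, I found what I wanted":
moe = margin_of_error(1000, 700)
print(f"70% yes, +/- {moe:.1%}")  # roughly +/- 2.8%
```

With only 50 responses from the same page the margin jumps to well over 10%, which is one way to show stakeholders why a low response rate limits what the survey can claim.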
PDF downloads
Measuring downloads alone tells you little: you don't know whether people actually open the PDF, or what they do with it afterwards. How meaningful is this metric on its own?
Search terms
Understanding what terminology is used to search your documentation helps you identify content gaps, or mismatches between the terminology used by your audience and your own.
What are the most consistently searched terms over time? What are the most frequently searched terms? What are the least-searched terms? What are the inconsistently searched terms over time? If you have enough search term data, you can start to contextualize the data with other types of information.
You can identify successful searches, where a search term led users to a relevant documentation page, and unsuccessful searches, where a search term led to no engagement or led users to an irrelevant documentation page.
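Classifying searches this way can be sketched from a simple search log, assuming your analytics records each term together with the result clicked (the log entries and URLs below are hypothetical):

```python
from collections import Counter

# Hypothetical search log: (term, clicked_result_url or None).
search_log = [
    ("api key", "/docs/authentication"),
    ("api key", None),               # no result clicked
    ("webhook", "/docs/webhooks"),
    ("reset password", None),
    ("reset password", None),
]

# A search counts as successful if it led the user to a page.
successful = Counter(term for term, url in search_log if url)
unsuccessful = Counter(term for term, url in search_log if url is None)

# Terms that repeatedly lead nowhere point to content gaps
# or terminology mismatches.
gaps = [t for t, n in unsuccessful.items() if n > successful.get(t, 0)]
print(gaps)  # ['reset password']
```

A fuller version would also check whether the clicked page was relevant (for example, via a follow-up bounce), but even this crude split surfaces the terms worth investigating first.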
Add meaning to your data
Whilst your raw statistics provide clues as to the impact your documentation is having on users, it's a good idea to back this up by linking them to your actual content types:
- Task
- Reference
- Introduction
- Concept
- Tutorial
In doing so, your data is contextualised. For example, a Reference page may only need a short dwell while the specific piece of information is retrieved, whereas a short dwell on a Tutorial suggests that it hasn't been followed by the user.
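One way to contextualise dwell is a simple lookup of expected dwell ranges per content type. The thresholds below are illustrative assumptions, not industry standards:

```python
# Rough expected dwell ranges in seconds per content type
# (illustrative assumptions only).
EXPECTED_DWELL = {
    "Reference": (5, 120),
    "Concept": (60, 600),
    "Tutorial": (300, 1800),
}

def interpret_dwell(content_type: str, seconds: float) -> str:
    low, high = EXPECTED_DWELL[content_type]
    if seconds < low:
        return "shorter than expected: page may not be serving its purpose"
    if seconds > high:
        return "longer than expected: content may be hard to follow"
    return "within the expected range"

print(interpret_dwell("Reference", 8))   # within the expected range
print(interpret_dwell("Tutorial", 40))   # shorter than expected...
```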
Add objectives for your content type
Being clear on objectives for each piece of content makes it easier to determine success and understand what metrics you need to collect.
Success for a Tutorial means that a user has engaged with it. So how many minutes should that take? It should be possible to assess page dwell in relation to the time it takes to follow the tutorial. Contrasting this with page views gives you an idea of how popular it has been with users.
For an SDK Reference, success looks different. A quick bounce might show that the information was found quickly; alternatively, the page might be left open while a user writes a script that uses it, giving a longer dwell time. Here the number of page views is helpful, and the page dwell is a useful indicator of how the page was used in the field.
A landing page for a topic benefits from another approach. A measure of success here would be the rate of internal link clicks, which shows how well the linked pages match the mental model of your users.
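The per-type objectives above could be wired into a small helper that picks the right success signal for each page. The field names and thresholds are assumptions for illustration, not a real analytics schema:

```python
def success_signal(page: dict) -> str:
    """Pick the success signal appropriate to the page's content type.

    The fields (views, avg_dwell_s, expected_dwell_s, link_clicks)
    are hypothetical analytics exports; thresholds are illustrative.
    """
    t = page["type"]
    if t == "Tutorial":
        # Engagement: dwell close to the time the tutorial takes to follow.
        ratio = page["avg_dwell_s"] / page["expected_dwell_s"]
        return "engaged" if ratio >= 0.8 else "skimmed"
    if t == "Reference":
        # Quick visits are fine; views alone show the page is being used.
        return "used" if page["views"] > 0 else "unused"
    if t == "Landing":
        # Click-through to internal links shows the navigation works.
        ctr = page["link_clicks"] / page["views"]
        return "navigating" if ctr >= 0.5 else "dead end"
    return "no objective defined"

print(success_signal({"type": "Tutorial", "avg_dwell_s": 700,
                      "expected_dwell_s": 800, "views": 120}))  # engaged
```

The point of the sketch is the shape, not the numbers: each content type gets its own objective, so the same dwell figure can mean success on one page and failure on another.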
Conclusion
- Decide on objectives of a page when it is created.
- Decide on metrics to capture.
Be aware that metrics don't tell the whole story. Consider user research and interviews to gauge real impact.