Planning an evaluation event
This type of evaluation event requires thorough planning on the part of both the institution and the vendors, but allows for better-informed decision-making.
Tasks you will need to undertake include:
- setting the agenda;
- preparing your evaluation team;
- briefing the suppliers;
- deciding how to score the tests;
- managing logistics during the event (see the section on conducting the evaluation).
Setting the agenda
Your statement of requirements should be the basis for devising a scenario to be demonstrated or script to be followed.
You won't be able to cover everything in your requirements specification, so you need to decide on priorities, which may include:
- the most critical elements of functionality;
- functionality where there is the highest volume of use;
- functionality that has been particularly problematic in the past;
- functionality that is important to your strategic objectives and could be a key differentiator for you as an institution.
Make sure you focus on what you want to achieve rather than specifying how it should be done in a way that may constrain the suppliers.
Think about the types of examples that will best demonstrate the functionality to you, so that you choose an appropriate mix of real course and module scenarios.
You will need to devise a detailed timetable, which means allocating a set amount of time to each element of the evaluation. This can be difficult to estimate when you are not familiar with the systems. You will already know that some tasks take longer than others, so use your best judgement based on what you already do.
You will need to allow some time for questions but, if your script has been well devised, you should not need detailed ad hoc questioning about every element.
Top Tips
Don't forget to include accessibility as part of the testing regime. Suppliers registered overseas might not work to the same exacting standards that apply in the UK. The University of York has a renowned expert in this area, Helen Petrie, who works with students facing a range of different challenges to test against the evaluation scenarios. Through this testing, the University of York identified issues for blind and partially sighted users during the evaluation process. These were raised with its preferred supplier, which was able to address them and ensure that future releases meet UK standards.
"Even when you've jumped through all of the other hoops you should do accessibility testing before you sign off. You have the company's attention before you sign so this is the best time to get action on areas such as this." Richard Walker, Head of E-Learning Development, University of York |
The evaluation team
Testing and evaluation is hard work. You need the right people involved and they need to be well supported:
- think of your assessors as a project team and invest some time in developing their abilities to critically assess and to work together;
- ideally the assessors should have been involved in defining the requirements - if they weren't, ensure they are well briefed on the thinking behind the requirements definition;
- consistency is important, so make sure the same people evaluate all of the products;
- ensure the assessors, and their managers, are aware of the time commitment required.
Briefing the suppliers
This type of evaluation involves a lot of work for suppliers as well, so make sure you help them prepare adequately:
- allow them sufficient time for preparation - you need to decide what is appropriate: we talked to some institutions that deliberately kept the preparation time very short so that suppliers couldn't present an overly 'polished' result that their academics would find hard to replicate;
- think about what information they need to set up the test scenarios e.g. data relating to course/module structures, assessment types, dummy student data;
- if you are planning to run any parallel sessions e.g. testing end-user functionality and technical matters at the same time, the suppliers will need to be aware of this in order to have sufficient staff on site;
- prepare a standard testing pack with all of the information for each supplier;
- consider holding a briefing session for the suppliers to allow them to ask questions - this could easily be run as a webinar.
Top Tips
Suppliers might try to negotiate the timetable with you, e.g. wanting to spend more time on one thing and less on others. Some of the suggestions may be reasonable, but take care that any changes don't compromise what you really wanted to see and that the evaluation remains fair to all of the vendors. Each will want to focus on the best aspects of their product and skip over the weaknesses.
"Usability tests and evaluation environments are good tools but come with their own challenges. A 'beauty pageant' attended by untrained users can often be of little value. You need to be very precise about what users are being asked to do and how they score it." John Usher, Senior Manager, Global Proposal Team, Blackboard |
Scoring the tests
You will need to decide how to score the tests. Your institution is crammed with experts who spend a lot of time designing marking rubrics, so make sure you draw on this expertise to ensure your scoring mechanism is fair and consistent:
- you might choose a numeric score, e.g. marks out of 10, but a quantitative approach may 'smooth over' some important issues;
- you might try a more qualitative approach whereby a requirement is rated as NOT MET, PARTLY MET, MET or EXCEEDED. This would require an accompanying rubric with definitions but would allow some flexibility in assessing grey areas;
- each assessor should complete an individual score sheet, with scores compared at the end - where the team members agree, the score can be taken as it stands, but any differences need to be discussed and a final score agreed;
- where there are a lot of individual elements being assessed, you might want to use a spreadsheet with macros to compare the scores automatically (a minimal scripted equivalent is sketched below).
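By way of illustration only, here is a minimal Python sketch of the kind of comparison such macros would perform. The file layout, column names and qualitative scale are assumptions for the example, not part of any particular institution's method: it reads one score sheet per assessor, accepts unanimous scores as they stand and flags the rest for discussion.

```python
import csv
from collections import defaultdict

# Assumed qualitative scale, worst to best (matching the rubric idea above).
SCALE = ["NOT MET", "PARTLY MET", "MET", "EXCEEDED"]

def load_scores(path):
    """Read one assessor's score sheet: a CSV with 'requirement' and 'score' columns."""
    with open(path, newline="") as f:
        scores = {row["requirement"]: row["score"].strip().upper()
                  for row in csv.DictReader(f)}
    for requirement, score in scores.items():
        if score not in SCALE:
            raise ValueError(f"{path}: unrecognised score {score!r} for {requirement}")
    return scores

def compare(sheets):
    """Accept unanimous scores as they stand; flag everything else for discussion."""
    by_requirement = defaultdict(dict)
    for assessor, scores in sheets.items():
        for requirement, score in scores.items():
            by_requirement[requirement][assessor] = score
    agreed, to_discuss = {}, {}
    for requirement, scores in by_requirement.items():
        if len(set(scores.values())) == 1:
            agreed[requirement] = next(iter(scores.values()))
        else:
            to_discuss[requirement] = scores  # discuss and agree a final score
    return agreed, to_discuss

if __name__ == "__main__":
    # Hypothetical file names: one score sheet per assessor.
    assessors = ["assessor_a", "assessor_b", "assessor_c"]
    agreed, to_discuss = compare({a: load_scores(f"{a}.csv") for a in assessors})
    print(f"{len(agreed)} requirements agreed, {len(to_discuss)} need discussion")
    for requirement, scores in sorted(to_discuss.items()):
        print(f"DISCUSS {requirement}: {scores}")
```

Whether you use a spreadsheet or a script, the important design choice is the same: individual scoring first, automatic identification of disagreements, then a recorded discussion to reach the final agreed score.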
There will inevitably be some 'grey areas' with each product. The supplier may tell you that this functionality is being developed and will be in a future release of the product or it may be that by changing your processes the system could achieve the desired output.
It is up to you to decide how important these gaps are and how you will compensate for them. You will need a clear understanding of this before you can develop an effective implementation plan with a realistic budget.
Top Tips
Decide how you will assess matters such as 'usability'. Edinburgh Napier University counted levels of hierarchy and numbers of clicks as well as undertaking some tests on page loading times compared with their existing system. There is more on the topic 'How do you assess usability?' in the section on Requirements gathering and prioritisation.
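Purely as an illustration - the toolkit does not describe how Edinburgh Napier ran its timing tests, and the URLs below are made up - a minimal Python sketch for collecting comparable page-load timings might look like this. Note that it times the HTTP fetch only, not full browser rendering.

```python
import statistics
import time
import urllib.request

def median_load_time(url, runs=5):
    """Fetch a page several times and return the median wall-clock time in seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()  # read the full body so the timing covers the whole transfer
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# Hypothetical URLs: the same kind of page in the current and the candidate system.
pages = [("existing system", "https://vle.example.ac.uk/course/home"),
         ("candidate system", "https://demo.example.com/course/home")]
for label, url in pages:
    print(f"{label}: {median_load_time(url):.2f}s")
```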
"When suppliers are given a free rein, you will end up seeing slightly different things using different data and it is much harder to compare.." Julie Voce, Head of Educational Technology, City, University of London |