Evaluation Report
After conducting our usability testing and analyzing the data, the final step was to put together an evaluation report summarizing our findings and presenting our recommendations.
This section was informed by the work of Brown & Green (2015).
"Does the instruction adequately solve the problem that was identified in the needs assessment and that resulted in the development of instructional materials?"
(Brown & Green, 2015, p. 172)
The Final Report
Executive Summary
Our design team concluded the project with an evaluation of our design. After identifying the instructional need and conducting the necessary analyses over the course of 4 months, we developed and tested a working prototype intended to be implemented as a feature within the Coinbase app. Below is our report on the success and feasibility of implementing our solution.
1. Audience
Who is the target audience for this program? We identified two distinct categories: our sponsor, Coinbase, and our learners.
Primary
Our learners are young working professionals in their 20s and 30s who either use Coinbase services or otherwise fit its user demographic. Because they are interested in making wiser financial decisions, they are the primary audience for this program.
Secondary
Of course, a secondary target audience is our sponsor, Coinbase. Having identified what turned out to be an instructional problem, Coinbase has a vested interest in the potential application of this program and how it can address its organizational needs.
2. Purpose
What was the main purpose of conducting our evaluation? After creating our prototype, we needed to get it into the hands of potential users for testing so that we could fine-tune and improve it.
Did we solve the gaps?
We wanted to test whether the 3 main gaps we had identified (Knowledge, Skills, and Motivation) were, in fact, being addressed by this solution.
Did our design make sense?
We wanted to know whether our prototype was not only intuitive to use but also whether learners could connect the dots between its usability and its purpose. Did they find that it "worked" for them?
As displayed on the previous page (Evaluation Plan), we created a list of evaluation issues we wanted to tackle, framed as questions. We created both Formative and Summative evaluation questions for use during different parts of the Evaluation phase. The list of questions is reproduced below:
3. Methodology
How did we go about conducting our evaluation? Specifically, who were our participants, how did we categorize them, and what methods and instruments did we use to gather the data?
Participants
Early on, our learner analysis showed that our users were all coming to the table with differing levels of prior knowledge. That meant a one-size-fits-all solution would fail to leave most of them better off as a result of our training. Therefore, we selected our participants in the same way that we designed our solution: by prior knowledge level.
Method
Using the Dick et al. evaluation model, we conducted usability tests with our users, encouraging them to use the Think Aloud Protocol to verbalize their thoughts as they worked.
Instruments
We gathered data primarily by observing and recording our usability testers. Some of our team members made audio recordings while others took notes by hand. We collated this data on a Miro board and used affinity mapping to isolate patterns.
4. Results
Our affinity mapping helped us organize the data and detect several repeating patterns. Below is a figure of the affinity map after the notes were organized into their relevant clusters.
Findings
Difficult Pre-Test
While our test users understood the purpose of the pre-assessment at the beginning of the lesson, they all found it to be much too difficult.
Good Media Use
The visual nature of the media was a hit with our users, who enjoyed the interactive graphic, the video, and the eLearning scenario.
Knowledge Checks
The short knowledge checks were great for keeping newly learned facts fresh, but users wanted to see more feedback afterwards.
Rewards System
The gamification and rewards incentive structure did a great job of motivating users and keeping them engaged with the prototype.
5. Conclusion
We based the questions below on the two audiences we created this solution for (our Sponsor and their Users). To that end, we make the following recommendations based on how well the solution addressed the needs of each audience.
Sponsor Needs
Is this a relevant and cost-effective solution to the organizational problem that Coinbase identified?
Cost-Effective
This solution can be built into the existing app as a feature.
Increased Knowledge
This solution leads to improved knowledge and skills after its use.
Decreased Complaints
By holding users more accountable for their decisions, this solution should reduce support complaints.
Learner Needs
Did this solution address the 3 main gaps that we identified, namely the Knowledge, Skills, and Motivation gaps?
Knowledge Gap
This solution provides the necessary content to fill the knowledge gap.
Skills Gap
This solution gives learners opportunities to practice what they have learned.
Motivation Gap
This solution is engaging and motivating for users.
Final Recommendation
With overwhelmingly positive feedback from our test users, we believe this solution is worth the investment. Several parts of it still require improvement in later versions, but overall, the results show that our team was on the right track. The solution's remaining flaws are cosmetic in nature; its foundations are very strong. With a few more iterations, this solution can be market-ready very soon.