Expert answer: CSE463 Central Florida College Heuristic Evaluation

  

Solved by verified expert:

Prototype can be found at this link: https://a2qppm.axshare.com

Part 1: Heuristic Evaluation, instructions: The purpose of this exercise is to give you experience conducting a Heuristic
Evaluation (HE). You will conduct an HE on your hi-fi prototype from Assignment #3. Step 1: Inspect
each element against Nielsen’s 10 heuristics. Identify as many negative aspects as you can. For each
identified violation of the heuristics, record ID, name, and heuristic, and combine into a table format
(column #1: ID, column #2: Name, column #3: Heuristic). Step 2: Summarize your findings as Usability
Aspect Reports (UARs). Each “negative” UAR should include ID, Name, Evidence, Explanation,
Severity (rating and justification with respect to frequency, impact and persistence – you must discuss all
three to get full points), and Solution. Also, capture a couple positive aspects that you feel stand out. Each
“positive” UAR should include ID, Name, Evidence, and Explanation. Include a screenshot with each
UAR for easy reference.
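If you track findings in a script rather than a spreadsheet, the required UAR fields map naturally onto a small record type. A minimal Python sketch (the class name, field layout, and example values are illustrative, not part of the assignment):

```python
from dataclasses import dataclass

@dataclass
class UAR:
    """One negative Usability Aspect Report entry."""
    id: str          # evaluator initials + "-HE-" + number, e.g., "abc-HE-01"
    name: str        # succinct description of the issue
    heuristic: str   # violated heuristic, "H1".."H10"
    evidence: str
    explanation: str
    severity: int    # 0 (not a problem) .. 4 (catastrophic)
    solution: str

# Hypothetical example entry
uar = UAR("abc-HE-01", "No feedback while saving", "H1",
          "Clicking Save gives no confirmation",
          "Users cannot tell whether the save succeeded",
          3, "Show a status message after saving")
```

Positive UARs would simply drop the severity and solution fields.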
Deliverable: Include a link to your hi-fi prototype along with a report with results from Step 1 (table) and
Step 2 (UARs with screenshots).

Part 2: Mental Models, instructions: The purpose of this exercise is to give you experience designing for a clear
mental model. Identify three Metaphors, Natural Mappings of Control, Virtual Affordances,
Feedforwards, AND/OR Feedbacks (i.e., three total, not three each) that could potentially improve your
hi-fi prototype from Assignment #3. For each identified improvement, record: Name (type of
improvement, i.e., Metaphor, Feedforward, etc.), Description (a detailed paragraph explaining the
modification of your design, which could be related to a single screen of your app, or the entire app),
Motivation (why this change is needed—describe your motivation in detail relative to either narrowing
the Gulf of Execution, Gulf of Evaluation, or both, and clearly explain how your improvement will better
align the user’s mental model with the developer’s mental model), Heuristics (list all Nielsen’s heuristics
that your improvement matches), Screenshot (provide a screenshot of the specific screen from your app
for easy reference; if your improvement covers your entire app, include a screenshot of a screen that is a
good representative).
Deliverable: A report identifying three improvements toward designing for a clearer mental model. Each
identified improvement should be summarized using the format Name, Description, Motivation,
Heuristics, and Screenshot, as previously described.

I have attached some lecture PowerPoints in case you’d like more info on how to do the Heuristic Evaluation with the UARs and Mental Models:
• Lecture 2 for UARs
• Lecture 9 for Heuristic Evaluation
• Lecture 11 for Mental Models
lecture_9___expert_review.pptx

lecture_2___thinkalouds.pptx


lecture_11___good_design_ii.pptx


CSE 463
INTRO TO HUMAN COMPUTER
INTERACTION
Lecture 9: Expert Review
Troy McDaniel
June 11 2019
1
Today’s Class
• Lecture on Expert Review
• Remaining reading assignments #7 and #8
– https://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/
– http://designingwebinterfaces.com/6-tips-for-a-great-flex-ux-part-5
2
How do you evaluate your designs in order
to improve them?
Ideally, at this stage, you know you’re developing a
useful product, and you also understand which
features make it desirable. We are now ready to
understand and improve usability.
3
Different Evaluation Methods
• Empirical Methods
– Observation.
– Experimentation.
• Analytical Methods (Expert Review)
– Derived from physical, psychological, sociological, or
design theories.
– Heuristics derived from experience.
• Generally use analytic before empirical
4
Expert Review
• “Discount usability engineering methods”
–Jakob Nielsen
• “Discount” meaning efficient in terms of benefit-cost ratio.
• Usually a small team of evaluators using
analytical methods to review an interface based
on recognized usability principles.
5
The Expert
• Who is the expert in expert review?
a) The user, because he or she knows about the
domain and tasks.
b) The user, because he or she knows about how the
application can be redesigned to make things easier
for them.
c) The design team, because they know about the
domain and tasks.
d) The design team, because they know about good
interface design.
6
The Expert
• Who is the expert in expert review?
a) The user, because he or she knows about the
domain and tasks.
b) The user, because he or she knows about how the
application can be redesigned to make things easier
for them.
c) The design team, because they know about the
domain and tasks.
d) The design team, because they know about good
interface design.
7
Implications
• Evaluators do not need to have the same
characteristics as your target users.
• Evaluators can ask questions about the interface.
8
Heuristic Evaluation
• Developed by Jakob Nielsen (in early 90s) to find
usability problems in a UI design.
• Small set (3-5) of evaluators examine UI
– Independently check for compliance with
usability principles (“heuristics”).
– Different evaluators will find different problems.
– Evaluators then aggregate findings.
– Evaluators write report, or observer records comments, then
generates report.
• Can perform on working UI, prototypes and even
sketches.
9
Nielsen’s 10 Heuristics
• H1: Visibility of system status
• H2: Match between system and real world
• H3: User control and freedom
• H4: Consistency and standards
• H5: Error prevention
• H6: Recognition rather than recall
• H7: Flexibility and efficiency of use
• H8: Aesthetic and minimalist design
• H9: Error recovery
• H10: Help and documentation
10
H1: Visibility of System Status
• Are users being kept informed?
• What input has been received?
• What process is the application doing?
• Are the results displayed?
11
H1: Visibility of System Status
Which example has better system status visibility?
12
H1: Visibility of System Status
• Progress & state information
• Little information
13
H2: Match between system and real
world
• Speak the user’s language.
• Follow real world conventions.
14
H2: Match between system and real
world
• Leverage what’s
natural.
• Scaffolding.
• Don’t break the metaphor (e.g., dragging a disk to the trash to eject it).
15
H3: User Control and Freedom
• Provide a clear way to navigate.
• Undo, back button.
• Don’t force fixed paths.
16
H3: User Control and Freedom
Example of search that supports user control and freedom in
terms of: easy to open, enter info, execute or cancel.
17
H4: Consistency and Standards
• Same words, situations, and actions mean the
same thing across applications.
• Follow platform conventions.
18
H4: Consistency and Standards
Which example is more consistent across screens?
19
H4: Consistency and Standards
• Is consistent
• Isn’t consistent
20
H5: Error Prevention
• Careful design to prevent problems from
occurring in the first place.
21
H5: Error Prevention
Which is an example of good error prevention?
Autofocus on input
22
H5: Error Prevention
All of the above.
• Autofocus on input.
• Indicate the primary action.
23
H6: Recognition rather than recall
• Make objects, action, and options visible.
24
H6: Recognition rather than recall
Which is an example of recognition rather than recall?
25
H6: Recognition rather than recall
Both are!
• Preview fonts.
• Type ahead for
coding.
26
H7: Flexibility and efficiency of use
• Accelerators for experts.
• Tailor frequent actions or objects.
27
H7: Flexibility and efficiency of use
Which is an example of flexibility or efficiency of use?
28
H7: Flexibility and efficiency of use
Again, both are.
• Shortcuts
• Accelerators
29
H8: Aesthetic and minimalist design
• Extraneous information in an interface competes
with relevant information.
30
H8: Aesthetic and minimalist design
Which example is more difficult to understand?
31
H8: Aesthetic and minimalist design
• Cluttered.
• Uses principles of
visual design.
32
Principles of Visual Design
• From: http://designingwebinterfaces.com/6-tips-for-a-great-flex-ux-part-5
• Visual layout should respect the
principles of contrast, repetition,
alignment, and proximity.
• Kontain’s search menu exemplifies
the four principles of visual design:
– Contrast: bold text is used for the two
labels in the search
– Repetition: the orange, blue, and
green text match the media types
Alignment: strong left alignment of
text, right aligned drop down
– Proximity: a light rule is used to
separate tags from the other options
33
H9: Error recovery
• Help users recognize, diagnose, and recover from
errors.
• Solution-oriented.
34
H9: Error recovery
• Which example is more informative?
35
H9: Error recovery
• Informative
• Uninformative
36
H10: Help and documentation
• Easy to search and find.
• Always available and task-oriented.
37
H10: Help and documentation
38
Question I
• What makes an expert in the expert review?
39
Question I
• What makes an expert in the expert review?
– Can be you.
– The design team.
– External people.
Typically done internally.
40
Question II
• How many evaluators?
41
Question II
• How many evaluators?
Sweet spot is 3-5. You’re still going to catch a lot of
problems using this procedure yourself on your
own interface.
42
Evaluators & Problems Found
From: https://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/
“Each row represents one of the 19
evaluators and each column represents
one of the 16 usability problems. Each
square shows whether the evaluator
represented by the row found the
usability problem represented by the
column: The square is black if this is the
case and white if the evaluator did not
find the problem. The rows have been
sorted in such a way that the most
successful evaluators are at the bottom
and the least successful are at the top.
The columns have been sorted in such a
way that the usability problems that are
the easiest to find are to the right and
the usability problems that are the most
difficult to find are to the left.”
43
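The linked article also models this matrix mathematically: if each evaluator independently finds a fraction λ of the problems (Nielsen reports about 31% on average), n evaluators are expected to find 1 − (1 − λ)^n of them. A quick sketch (the 0.31 default is illustrative; real values vary by project):

```python
def proportion_found(n_evaluators, l=0.31):
    """Expected share of usability problems found by n independent
    evaluators, each finding fraction l on average (Nielsen's model)."""
    return 1 - (1 - l) ** n_evaluators

for n in (1, 3, 5, 10):
    print(n, round(proportion_found(n), 2))
```

This is the reason 3-5 evaluators is the sweet spot: five already catch roughly 84% of problems under this model, and each additional evaluator adds less.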
Heuristic Evaluation Procedure
• Step 1: Brief the group
• Step 2: Evaluate individually
• Step 3: Aggregate issues
• Step 4: Apply severity ratings
• Step 5: Summarize findings
44
Step 1: Brief the Group
• Domain briefing
– Important if evaluators are unfamiliar with the
product’s domain (if system is domain-dependent).
• Scenario briefing
– Can optionally include specific tasks or typical usage
scenarios (with steps) that users take.
• If system is a “walk-up-and-use” interface for
general population or if evaluator is domain
expert, can use system without further
assistance.
45
Step 2: Evaluate Individually
• Two passes
1. Inspect flow (and optional tasks/scenarios) – this
allows the evaluator to get a “feel” for the interaction
flow and general scope of system.
2. Inspect each element against heuristics (while
knowing how they fit into larger whole from first
pass).
• ID, name, heuristic
– ID: <evaluator initials>-HE-## (e.g., cda-HE-09)
– Name: succinct description
– Heuristic: e.g., H10
46
Step 2: Evaluate Individually
• Practice activity (not graded): Identify the heuristic for
each issue below.
ID         Name
cda-HE-09  No feedback during image upload process
cda-HE-10  File size instructions use jargon
cda-HE-11  Upload error message provides no guidance
cda-HE-12  File navigator starts from root folder every time
cda-HE-13  Image upload requires users specify file type

• H1: Visibility of system status
• H2: Match between system and real world
• H3: User control and freedom
• H4: Consistency and standards
• H5: Error prevention
• H6: Recognition rather than recall
• H7: Flexibility and efficiency of use
• H8: Aesthetic and minimalist design
• H9: Error recovery
• H10: Help and documentation
47
Step 2: Evaluate Individually
ID         Name                                               Heuristic(s)
cda-HE-09  No feedback during image upload process            H1 Visibility
cda-HE-10  File size instructions use jargon                  H2 Match
cda-HE-11  Upload error message provides no guidance          H9 Recovery
cda-HE-12  File navigator starts from root folder every time  H7 Flexibility
cda-HE-13  Image upload requires users specify file type      H6 Recognition
48
Step 3: Aggregate Issues
• Done as a group.
• Read issues in turn, consolidate into a list.
49
Step 3: Aggregate Issues
ID     Combined name                                                                  Heuristic(s)                         Evaluator(s)
HE-12  No feedback during image upload process                                        H1 Visibility                        cda-HE-09, ljd-HE-02, ht-HE-04
HE-13  OK and Apply button perform same action                                        H4 Consistency                       ljd-HE-03, sh-HE-11
HE-14  New entries appear above viewable area, user must manually scroll to see them  H1 Visibility                        ljd-HE-06, sh-HE-02, ht-HE-04
HE-15  Email addresses must be added manually from memory                             H5 Error prevention, H6 Recognition  ljd-HE-07, cda-HE-04, ht-HE-01
50
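When evaluators submit findings electronically, the consolidation in Step 3 can be started mechanically before the group discussion. A sketch assuming each finding is an (individual ID, agreed combined name, heuristic) tuple — matching duplicate issues under one name is still a human judgment; the data here is hypothetical:

```python
from collections import defaultdict

# Hypothetical individual findings: (individual ID, combined issue name, heuristic)
findings = [
    ("cda-HE-09", "No feedback during image upload process", "H1 Visibility"),
    ("ljd-HE-02", "No feedback during image upload process", "H1 Visibility"),
    ("ht-HE-04",  "No feedback during image upload process", "H1 Visibility"),
    ("ljd-HE-03", "OK and Apply button perform same action", "H4 Consistency"),
    ("sh-HE-11",  "OK and Apply button perform same action", "H4 Consistency"),
]

# Group individual IDs under each (name, heuristic) pair
grouped = defaultdict(list)
for evaluator_id, name, heuristic in findings:
    grouped[(name, heuristic)].append(evaluator_id)

# Emit the combined table with fresh consolidated IDs
for i, ((name, heuristic), evaluators) in enumerate(grouped.items(), start=12):
    print(f"HE-{i}: {name} [{heuristic}] <- {', '.join(evaluators)}")
```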
Step 4: Apply Severity Ratings
4 Catastrophic
– Product cannot be released
3 Major
– High-priority issue
2 Minor
– Good to fix when there’s a lull
1 Cosmetic
– Icing on the cake (these rarely get done)
0 Not a problem
– I don’t agree that this is a problem at all
51
Step 4: Apply Severity Ratings
• Justification:
– Frequency: Common or rare occurrence?
– Impact: How bad is it? How hard to recover?
– Persistence: One-time problem users can work
around or unavoidable problem?
• For each issue, average the rating from each
evaluator.
52
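The averaging in Step 4 is straightforward to script; a sketch with hypothetical ratings from three evaluators per issue:

```python
# Hypothetical severity ratings (0-4) from three evaluators per issue
ratings = {
    "HE-12": [3, 4, 3],
    "HE-13": [1, 2, 1],
}

# Average each issue's ratings, then prioritize highest severity first
averages = {issue: sum(r) / len(r) for issue, r in ratings.items()}
for issue, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(issue, round(avg, 2))
```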
Step 5: Summarize findings
• Make UARs
– Audience: primarily developers.
– Specific and convincing.
– Compiled in final report’s appendix or entered
directly into bug tracking system.
• Write executive summary
– Audience: project manager, team leads.
– Look for the forest in the trees.
– Consider affinity diagramming.
53
Advantages of HE
• Cheap, fast, easy.
• Don’t need to identify tasks, activities.
• Can identify obvious fixes.
• Can expose problems user testing doesn’t.
• Provides a shared language for talking about usability recommendations.
54
Disadvantages of HE
• Inconsistent.
• False alarms — problems unconnected with tasks.
• May be hard to apply to new technology.
55
Question III
• How should you narrow down the results of the
aggregate table to determine what to focus on?
56
Question III
• How should you narrow down the results of the
aggregate table to determine what to focus on?
Apply severity ratings.
57
Question IV
• What about positive aspects?
58
Question IV
• What about positive aspects?
Should list all negative aspects.
Don’t need to list all positive aspects.
Can be useful to list the best ones.
59
CSE 463
INTRO TO HUMAN COMPUTER
INTERACTION
Lecture 2: Thinkalouds
Troy McDaniel
May 21 2019
1
Today’s Class
• Reading assignment #2
• Introduction to think-alouds
2
HCI: Two Key Elements
• Iteration at every stage
– We must continually strive to learn and improve.
• Go to our user
– The user is not like me.
– Example from my own work in stroke rehabilitation.
3
Go to the User I
• What is harder to solve for high school Algebra students?
• Story Problem
As a waiter, Ted gets $6 per hour. One night he made $66
in tips and earned a total of $81.90. How many hours did
Ted work?
• Word Problem
Starting with some number, if I multiply it by 6 and then
add 66, I get 81.90. What number did I start with?
• Equation
x * 6 + 66 = 81.90
4
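For reference, all three formats reduce to the same arithmetic; a trivial check of the equation form:

```python
# Solve x * 6 + 66 = 81.90 for the hours Ted worked
hours = (81.90 - 66) / 6
print(round(hours, 2))  # 2.65
```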
Go to the User II
5
Go to the User III
• “The user is not like me.”
• Keep users involved throughout
the design
– Understand work process.
– Get constant feedback.
– Co-design sessions
• Another example from my own
work.
• “User-centered” design mind-set
– Thinking of the world in users’
terms (empathy).
– Not technology-centered /
feature driven.
– Think of benefit to users.
6
One Technique: Think-aloud
• Participants verbalize their thoughts as they
work:
– On a task that is interesting or important
– Using a system or prototype you want to improve
– While you observe
7
Users
• You are designing a mobile application to help an
interior decorator visualize furniture
arrangements, and want to conduct a user study
with a prototype. Who should you ask to
participate:
– A friend
– A user experience (UX) designer
– An interior decorator
– Someone who has decorated their own house
recently
8
Users
• You are designing a mobile application to help an
interior decorator visualize furniture
arrangements, and want to conduct a user study
with a prototype. Who should you ask to
participate:
– A friend
– A user experience (UX) designer
– Correct answer: An interior decorator
– Someone who has decorated their own house
recently
9
Tasks
• Authentic tasks are important.
• Can be better to choose combinations of tasks
rather than single fragmented tasks.
10
Prototype
• Anything users can interact with:
– Paper prototype
– Digital prototype
– Wizard of Oz system
– Working system
• Could use competitor’s system, or system you’re
trying to redesign in early stages of design.
11
Formative Evaluation
• Evaluate
– As you build
– Frequently
– Can be informal
• Hypotheticals are not very useful.
• More useful:
– Real users
– Real tasks
12
Think-alouds
• “… may be the single most valuable usability
engineering method …”
– Nielsen, 1993, Usability Engineering p. 195
• Useful at almost all stages of design and
implementation.
• Pioneered by Allen Newell and Herb Simon in
1970s.
13
Instructions I
• “I’m going to ask you to think aloud. What I
mean by ‘think aloud’ is that I want you to tell
me everything that you are thinking from the
time that you see the statement of the task until
you finish the task. I would like you to talk aloud
constantly from the time I give you the task until
you have completed it.”
14
Instructions II
• “I won’t be able to answer questions, but if
questions cross your mind, say them aloud”
• “If you forget to think aloud, I’ll say please keep
talking”
• Essentially, ask users to tell you what they are
thinking: What they are reading, what they are
trying to do, and questions that come to mind.
• Any volunteer to perform a think-aloud to
answer question on the following slide?
15
Sample Task
• How many rooms are in the house where you
grew up?
16
Theory

             talk aloud                              think aloud
verbalize    linguistic contents of working memory   linguistic and nonlinguistic contents of working memory
task impact  minimal                                 slows down performance
17
Task Impact
• In general, think-alouds slow task performance
down.
• Therefore, use caution if you plan to combine
think-alouds with user studies aimed at
gathering bottom-line data, such as task
completion times.
• General rule of thumb: Separate process data
gathering from bottom-line data gathering
18
Example
• How many rooms were in your childhood home?
• If you verbalize nonlinguistic elements (e.g.,
moving through the house), it’s a think-aloud.
• If you simply count, it’s a talk-aloud.
• More examples
– Web example: https://www.youtube.com/watch?v=g34tOmyKaMM
– Software example: https://www.youtube.com/watch?v=nJ2udLjdsx4
– App example: https://www.youtube.com/watch?v=y4k2vYoOXO0
19
Critical Incidents
• When thinking aloud, users are trying to
accomplish specific tasks.
• When observing a think-aloud, you are looking
for critical incidents that point to a user’s success
or failure with respect to their tasks.
• More on this later…
20
Good Data from a Think-Aloud
• “I want to do…”
• “I’m looking at the UI, and I think it does…”
• “Hmm, that’s not what I expected; I thought it
was going to…”
• “That took longer than I expected.”
• What might be some other examples?
21
Think-aloud Practice
• Let’s practice think-alouds (this is not a graded in-class
activity)
• Get in groups of 2.
• Pick one of you to be the researcher and one of you to
be the user.
• Have the user think-aloud as he or she uses a website,
app, etc. of the researcher’s choice (it could even be
your own website or app you’ve developed), based on
one of the tasks the researcher prepared.
• If you finish, switch roles.
22
Discussion
• Would any team like to share:
– What happened during your think-aloud?
– What did you learn about the website, app, etc.?
– Would you do something differently the next time
you try this?
23
Discussion II
• As the researcher, was it:
– Difficult to not answer questions?
– Challenging to get your participant to speak?
– Did you encounter any other difficulties?
– Did you have to step in to help?
– Did you feel you unintentionally distorted results?
24
Discussion III
• Why don’t we answer questions?
25
Discussion III
• Why don’t we answer questions?
– Could distort results (bias user’s feelings/thoughts,
redirect their attention, etc.)
– More on this later
26
Reading Assignment #1 Recap I
• “… best test users will be people who are
representative of the people you expect to have
as users.”
• “If it’s hard to find really appropriate test users
you may want to do some testing with people
who represent some approximation to what you
really want…”
• What happens when we don’t use real users?
27
Reading Assignment #1 Recap I
• “… best test users will be people who are
representative of the people you expect to have
as users.”
• “If it’s hard to find really appropriate test users
you may want to do some testing with people
who represent some approximation to what you
really want…”
• What happens when we don’t use real users?
– Organizational differences; differences in domain
expertise (i.e., language); among others.
28
Reading Assignment #1 Recap II
• “… test tasks should reflect what you think real
tasks are going to be like.”
• Why is this?
29
Reading Assignment #1 Recap III
• “Mockups blur into prototypes, with the
distinction that a mockup is rougher and cheaper
and a prototype is more finished and expensive.”
• In this class, the term ‘prototype’ covers
mockups.
• How to prototype systems that interact “too
closely” with the user, such as drawing software?
30
Reading Assignment #1 Recap III
• “Mockups blur into prototypes, with the
distinction that a mockup is rougher and cheaper
and a prototype is more finished and expensive.”
• In this class, the term ‘prototype’ covers
mockups.
• How to prototype syst …
