Human Computer Interaction
- Arun B
- May 2, 2020
- 14 min read
Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems, and how computers affect our lives and our surroundings.

HCI covers not only the use of computers but how we interact with these machines: how we access key information, communicate with the outside world and perform day-to-day tasks. It involves input and output devices such as the keyboard, mouse, finger(s), voice commands, switches and even other body parts; the interaction techniques that use them; how information is requested and presented to the user; how machine actions are controlled and monitored; how users get help; the tools and techniques used to design, build, test and evaluate user interfaces and experiences; and the processes that developers and designers follow when they create those interfaces and experiences.
HCI is an interdisciplinary area. It is emerging as a specialty concern within several disciplines, each with a different emphasis: computer science (application design and engineering of human interfaces), psychology (the application of theories of cognitive processes and the empirical analysis of user behaviour and users' emotional bond with machines), sociology (interactions between technology, work, and organization), industrial design (interactive products) and so on. Although these disciplines address different parts of the HCI challenge, everything is geared towards one bigger question: how can our interaction with machines be made better and easier? As machines become more complex and powerful and penetrate deeper into our day-to-day lives, it is increasingly important to make human-machine experiences pleasing, so that adopting them feels natural and using them is easier, quicker and happier.

Research in Human-Computer Interaction (HCI) has been spectacularly successful and has fundamentally changed computing. One widely popular example (though not the best) is the graphical interface used by Microsoft Windows 95, which is based on the Macintosh, which is based on work at Xerox PARC, which in turn is based on early research at the Stanford Research Laboratory (now SRI) and at the Massachusetts Institute of Technology. Another extraordinary example is the Apple iPhone, which has changed the face of the mobile phone forever and has fuelled an entirely new discipline of HCI, or more precisely human-computer experience. It is worth mentioning in this context that smartphones are nothing but tiny, pocket-sized, powerful computers.
The Human-Computer Interaction (HCI) program is playing a leading role in creating exciting new user interface software and building immersive human-machine experiences and technology. It supports a broad spectrum of fundamental research that will ultimately transform the human-computer interaction experience, so that the computer is no longer a distracting focus of attention but rather an invisible tool that empowers the individual user and facilitates natural and productive human-human collaboration and communication.
A computer system (which also includes smartphones, tablets and any form of embedded computational device, such as an aircraft cockpit, household appliances, vehicles and virtually any machine that processes information) comprises various elements, each of which affects the user of the system.
Input devices for interactive use, allowing text entry, drawing and selection from the screen:
Text entry: traditional keyboard, phone text entry, speech and handwriting
Pointing: principally the mouse, but also touch screen/pad, stylus, and others
3D interaction devices
Voice command etc.
Output devices for interactive use:
Different types of screen mostly using some form of bitmap display
Large displays and situated displays for shared and public use
Digital paper may be usable in the near future
Speakers etc.
Memory:
Short-term memory: RAM
Long-term memory: magnetic and optical disks
Access methods as they may limit or help the user
Processing:
This directly affects the power or speed of processing information or a command
Limitations on processing speed
Networks and their impact on system performance
Humans are limited in their capacity to process information and perform certain tasks. This has important implications for design. Information is received and responses given via a number of input and output channels:
Visual channel
Auditory channel
Haptic channel
Movement
Then information is stored in our memory. There are primarily three types of memory:
Sensory memory
Short-term (working) memory
Long-term memory
We humans process and apply information from stored or real time memory for:
Reasoning
Problem solving
Skill acquisition
Error etc.
Emotion influences human capabilities and machine interactions. Users share common capabilities but are individuals with differences, which should not be ignored.
The communication between the user (human) and the system (computer) is called interaction. The human-computer interaction framework has four parts:
User
Input
System
Output
Interaction models help us to understand what is going on in the interaction between user and system. They address the translations between what the user wants and what the system does. Ergonomics looks at the physical characteristics of the interaction and how these influence its effectiveness. The dialog between user and system is influenced by the style of the interface. And we apply these to design the effective HCI interface.
Human-computer interaction is concerned with the joint performance of tasks by humans and machines; the structure of communication between human and machine; human capabilities to use machines (including the learnability of interfaces); algorithms and programming of the interface itself; engineering concerns that arise in designing and building interfaces; the process of specification, design, and implementation of interfaces; and design trade-offs. Human-computer interaction is thus a balanced mix of science, engineering, and design.
The goals of HCI are to produce usable, visually pleasing and safe, as well as functional, systems. In order to produce computer systems with good usability, designers and developers must attempt to:
Understand the factors that determine how people use technology
Develop tools and techniques to enable building suitable systems (which include hardware and software)
Achieve efficient, effective and safe interaction
Put people first
Cognition is the processing of information from the world around us. It includes perception, attention, pattern matching, memory, language processing, decision-making, and problem solving. Cognitive load is the amount of mental resources needed to perform a given task.
All user interfaces make cognitive demands on users. Users must master special rules of system use, learn new concepts, and retain information in short-term memory. They must create and refine a mental model of how the system works and how they should use it. Systems that use purely auditory interfaces further challenge human memory and attention because they present information serially and non-persistently. Cognition is also the process by which we gain knowledge. The processes that contribute to cognition include:
Understanding
Remembering
Reasoning
Attending
Being aware
Acquiring skills
Creating new ideas
Successful user interface designs must respect the limitations of human cognitive processing. If an interface requires the user to hold too many items in short-term memory or to learn a complex set of commands too quickly, it will fail.
There are three cognitive challenges you should consider as your design progresses:
Conceptual complexity: How complex are the new concepts users must learn? How well do new mental structures match concepts and procedures that users are already familiar with?
Memory load: How much information must users hold in their short-term memory? How much new material (e.g., commands, procedures) must they learn?
Attention: Is it easy for users to attend to the most salient information? Will users' attention be divided? If they are momentarily distracted (e.g., while driving), can they seamlessly continue their interaction with the system when they are ready?
A key aim of HCI is to understand how humans interact with computers, and to represent how knowledge is passed between the two. Interaction design is about creating interventions in often complex situations using technology of many kinds, including computer software, the web and physical devices. Interaction design involves:
Achieving goals within constraints and trade-off between these
Understanding the raw materials: computer and human
Understanding the limitations of humans and of design
The design process has several stages and is iterative and never complete.
Interaction starts with getting to know the users and their context:
Finding out who they are and what they are like ...
Talking to them, watching them
Scenarios are rich design stories, which can be used and reused throughout design:
They help us see what users will want to do
They give a step-by-step walkthrough of users' interactions: including what they see, do and are thinking
Users need to find their way around a system; this involves:
Helping users know where they are, where they have been and what they can do next
Creating overall structures that are easy to understand and fit the users' needs
Designing comprehensible screens and control panels
Complexity of design means we don't get it right first time:
So we need iteration and prototypes to try out and evaluate
But iteration can get trapped in local maxima, designs that have no simple improvements, but are not good
Theory and models can help give good start points
Usability is a measure of the effectiveness, efficiency and satisfaction with which specified users can achieve specified goals in a particular environment. It asks the following:
Is it effective to use?
Is it efficient to use?
Is it safe to use?
Does it have good utility?
Is it easy to learn?
Is it easy to remember how to use?
Any user-centred development must have usability at its core. If the design is not usable, it will never be a success. Using usability fact-finding techniques, the experience and interface designers get answers to such questions, and based on those answers the design process begins. The design process usually follows these steps:
Data Collection; surveys, user questionnaires, statistical analysis
Data Analysis; analyze the tasks the user has to perform to accomplish their goals, and analyze the environment in which the product will be used
User Modeling; a computational model of how people perform tasks and solve problems, based on psychological principles
Design; interface design determines how the product presents itself, while interaction design determines how the product should work. More about interaction design is discussed below
Prototyping; while creating the interaction design, the main concern is usability. Rapid prototyping can help solve this challenge
Evaluation

Interaction can be defined as a dialogue between the computer and the user. The common styles of interactions are:
Command line interface
Menus
Natural language (audio or keyboard inputs)
Query and response
Dialogue
Forms and spreadsheets
WIMP (Windows, Icons, Menus and Pointers)
Interaction designers must keep users and their requirements & limitations in mind while thinking of the interaction design. Not all types of users are the same.
Users are different by tasks, cognitive and perceptual abilities, personality differences, cultural differences, disabilities, age etc. Usage environments are also different, such as Physical work environments, Hardware and software platforms etc.
A good interaction design should also allow and encourage the user to play with and explore the system. Any action by the user should lead to an immediate reaction from the system (ideally within 0.1 seconds). There should also be an UNDO function available when the user takes an action. The UNDO function allows the last user action(s) to be "undone". The presence of an UNDO function encourages users to explore the functionality of a system.
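The UNDO behaviour described above is commonly implemented as a stack of reversible actions. Here is a minimal, illustrative sketch in Python (the class and function names are my own, not from any particular toolkit):

```python
class UndoStack:
    """Records reversible actions so the most recent one(s) can be undone."""

    def __init__(self):
        self._done = []  # (do_fn, undo_fn) pairs, most recent last

    def perform(self, do_fn, undo_fn):
        do_fn()                       # apply the action immediately
        self._done.append((do_fn, undo_fn))

    def undo(self):
        if not self._done:
            return False              # nothing left to undo
        _, undo_fn = self._done.pop()
        undo_fn()                     # reverse the most recent action
        return True


# Example: editing a simple document state
doc = []
stack = UndoStack()
stack.perform(lambda: doc.append("hello"), lambda: doc.pop())
stack.perform(lambda: doc.append("world"), lambda: doc.pop())
stack.undo()    # removes "world"
print(doc)      # ['hello']
```

Because every action is recorded with its inverse, the user can explore freely, knowing any step can be taken back.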
One of the key channels of human interaction with machines is through the User Interfaces. Interface design deals with the process of developing a method for two (or more) modules in a system to connect and communicate. These modules can apply to hardware, software or the interface between a user and a machine.
Good interface design is therefore important for any kind of interactive software, and of utmost importance in systems with high costs of failure (e.g., nuclear power plants, space mission control), systems with high demands on operators (e.g., rescue coordination centres, combat aircraft, call centres), and systems that require use while on the move (e.g., smartphones, tablets etc.).
Bad interfaces may cause users to need more time to perform their tasks, make more errors, feel dissatisfied, need more time to learn the software, fail to learn or use its full functionality and, if given a choice, refrain from using the software at all.
So it is important to evaluate and measure the quality and effectiveness of the user interface. While doing so, the following items should be taken into consideration.
Speed of performance
How long does it take to carry out typical tasks?
Error/success rate
How many and what kind of errors do people make in carrying out these tasks?
How many tasks were successfully completed?
Time to learn
How long does it take for users to learn what actions are required to perform their tasks?
Retention over time
How well do users maintain their knowledge and skills over given periods of time?
Subjective satisfaction
How much did users like using various aspects of the system?
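These measures can be computed directly from usability-test logs. The sketch below, in Python, uses a hypothetical log format of my own (task id, seconds taken, errors made, completed flag); it is an illustration, not a standard:

```python
# Each record: (task_id, seconds_taken, errors_made, completed)
sessions = [
    ("book_flight", 42.0, 1, True),
    ("book_flight", 55.5, 3, False),
    ("book_flight", 38.2, 0, True),
]

completed = [s for s in sessions if s[3]]
success_rate = len(completed) / len(sessions)               # fraction of attempts finished
mean_time = sum(s[1] for s in completed) / len(completed)   # avg time, successful attempts only
mean_errors = sum(s[2] for s in sessions) / len(sessions)   # avg errors per attempt

print(f"success rate: {success_rate:.0%}")
print(f"mean time to complete: {mean_time:.1f}s")
print(f"mean errors per attempt: {mean_errors:.2f}")
```

Subjective satisfaction, by contrast, is usually gathered through questionnaires rather than computed from logs.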
Gestalt laws describe regularities in how humans perceive the things they see around them.
Proximity: objects that are close to each other tend to be seen as a group.
Similarity: objects of the same shape or color are seen as belonging together.
Closure: Missing parts of an object are filled in to complete it, so that it appears as a whole.
Continuity: lines tend to be seen as continuous, even if they are interrupted.
Symmetry: regions bounded by symmetrical borders tend to be perceived as coherent figures.
While designing menus, screen elements or screen layouts, the above rules can be applied to provide important visual cues to users. For example, menu items that perform similar tasks can be grouped and placed next to each other.
If users can quickly acquire a good functional model of the system, it can win their hearts and they will love using it. With the help of mental models (both functional and structural) we can see how people's thought processes and actions are structured.
Interface metaphors evoke an initial mental model in users of the system's structure and operation. Metaphors should relate to users' past experiences and should be consistent. For example:
Typewriter metaphor: Evoked easily due to physical similarities. Should be avoided
Desktop metaphor: Currently the predominant metaphor.
Book metaphor: For hypertext, hypertext-like online documentation.
Filing cabinets: For online documentation, subdivisions in web offerings, system settings.
Or it could be composite metaphors: Combine 2 or more metaphors (like office, file cabinet and desktop)
Another important point to consider is customizable menus. Users should have the ability to make changes to the menus, to cater to their needs (= "adaptable menus"). It can be done by means of:
Introduce/change shortcut codes
Hide/delete redundant menu items
Move/duplicate menu items into other menus
It does not seem advisable to allow for completely automatic positional changes in menus, like re-ordering or hierarchical repositioning of menu items based on usage frequency (= "adaptive menus").
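An adaptable menu of this kind can be modelled as user-editable data rather than a fixed structure. A minimal Python sketch (class and item names are hypothetical):

```python
class AdaptableMenu:
    """A menu whose items and shortcuts the user may edit directly,
    per the 'adaptable menus' idea above (user-driven, not automatic)."""

    def __init__(self, items):
        self.items = list(items)        # visible menu entries, in order
        self.shortcuts = {}             # user-assigned shortcut -> item

    def set_shortcut(self, key, item):
        if item in self.items:
            self.shortcuts[key] = item  # introduce/change a shortcut code

    def hide(self, item):
        if item in self.items:
            self.items.remove(item)     # hide a redundant menu item

    def move(self, item, other_menu):
        if item in self.items:
            self.items.remove(item)
            other_menu.items.append(item)  # move item into another menu


file_menu = AdaptableMenu(["Open", "Save", "Export", "Print"])
edit_menu = AdaptableMenu(["Undo", "Redo"])
file_menu.set_shortcut("Ctrl+S", "Save")
file_menu.hide("Export")            # this user never exports
file_menu.move("Print", edit_menu)  # this user prefers Print elsewhere
print(file_menu.items)              # ['Open', 'Save']
```

Note that every change here is initiated by the user; nothing is reordered automatically, which is exactly the distinction drawn above between adaptable and adaptive menus.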
Menu-based Interaction also should allow users to perform 3 basic activities:
Navigation (in menu hierarchy, information resource, etc.)
Selection (of data, parameters, etc.)
Activation (of programs, documents, etc.)
Now let’s shift the focus from menus to the layout, or the interface in general.
The layout should reflect the structure of the task or the task solution process, and not the structure of the underlying program. All information that is necessary to solve a coherent sub-task should be visible on the same screen.
The screen should not contain information that is never relevant for the user.
The screen layout should be more or less vertically symmetric.
All information that belongs together should be grouped in a clearly visually separated unit that is always presented at the same place.
Proximity/distance is mostly good enough for grouping; if not, use lines, differently coloured background, or boxes. Users should be able to enter and correct data in arbitrary order.
If connected screens have to be used, the same headlines should be used; information units needed in two or more screens should be presented on all of them, at the same location. Users should always be able to find information on how to get back to the previous screen. Unnecessary colours and embellishments should be avoided. Emphasis should only be used if really necessary, and only with the necessary prominence.
Text in mixed upper/lower case is read about 12% faster than text in upper case only. Serif fonts are more easily readable than sans-serif fonts.
Proportional fonts are more easily readable than fixed-width fonts.
Do not use more than 1-3 different fonts and 1-3 different font sizes.
Lines should not be longer than 40 characters.
1 1/2 spaced text can be read 10% faster than single-spaced text.
Justified text has no advantages over left-aligned text. If lines are short, the reading speed of justified text is about 12% slower due to the larger spacing between words. Information units, particularly in help and error messages, should not be longer than 12-14 lines (plus possibly a figure). Use "more..." links for more detailed information. Emphasis should not be used very frequently.
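The line-length guideline above is easy to enforce programmatically; for example, with Python's standard textwrap module:

```python
import textwrap

paragraph = ("Information units, particularly in help and error "
             "messages, should be kept short and easy to scan.")

# Wrap to the ~40-character line length recommended above
lines = textwrap.wrap(paragraph, width=40)
for line in lines:
    print(line)
```

textwrap breaks only at whitespace by default, so no word is split; each output line stays within the 40-character limit.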
Colour also plays a vital role in design. You may also like to read my previous post on colour theory.
Now in addition to what I wrote in my previous post, there are few more guidelines I would like to add here:
Do not use blue for small objects (since human sensitivity for blue is very low, particularly in the fovea)
Blue is a good background colour (since human sensitivity for blue is very low and since receptors for blue are roughly evenly distributed over the retina)
Neighbouring objects should not differ merely in their amount of blue (e.g., red, red with 50% blue, and red with 100% blue are hard to tell apart side by side).
If red and green are used for small objects, these should be in the centre (since the sensitivity for these colours is far higher in the centre).
If red and green are used as signals (warnings) in the periphery, they should have additional emphasis (like blinking or change in size).
Black, white, yellow and blue can be used in the periphery since the sensitivity of the retina is roughly the same.
The rapid spread of touchscreen smartphones, mobile devices and, lately, wearable devices is fast changing the design and HCI landscape. Although the fundamentals don’t change, certain factors must be kept in mind while creating interfaces for mobile or wearable devices. Keep in mind that they come in a wide variety of sizes and shapes and are often used in an ‘eye-less’ or blind mode. Small screen real estate also limits design flexibility and choices. This article is not focused on mobile devices; I am going to write another article dedicated to this field. Please look for new postings in the next few weeks.
I would like to conclude the design discussion by providing some guidelines on HCI design for manually impaired users. These include people with disabilities, but also many elderly users and "situationally handicapped" users. They have problems with, for example:
Positioning the mouse, clicking and dragging. Mouse operations should also be performable using the keyboard only (e.g., cursor keys for navigation, a function key for selecting menu items)
Simultaneously pressing two or more keys (e.g., CTRL and SHIFT). Allow users to press these keys sequentially.
Entering larger amounts of data. Provide default values.
Provide the next possible value when the user presses a key: successively offer possible values and stop when the user hits a key, or expand input to the first possible value. Allow users to define aliases and shortcuts. For severe forms of manual impairment, special input devices are needed (head mouse, foot mouse, suction tubes, speech recognition).
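The "press modifier keys sequentially" accommodation mentioned above (often called Sticky Keys in desktop operating systems) can be sketched as a small state machine that latches modifiers until a normal key completes the chord. A hypothetical Python sketch:

```python
MODIFIERS = {"CTRL", "SHIFT", "ALT"}


class StickyKeys:
    """Latches modifier keys so chords like CTRL+SHIFT+S can be entered
    one key at a time instead of simultaneously."""

    def __init__(self):
        self._latched = set()

    def press(self, key):
        if key in MODIFIERS:
            self._latched.add(key)  # latch the modifier; wait for more input
            return None
        # A normal key completes the chord and clears the latch
        chord = "+".join(sorted(self._latched) + [key])
        self._latched.clear()
        return chord


kb = StickyKeys()
kb.press("CTRL")        # latched, no chord emitted yet
kb.press("SHIFT")       # latched
print(kb.press("S"))    # CTRL+SHIFT+S
```

The latch clears after each chord, so ordinary single-key typing is unaffected between chords.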
Visually impaired users; could also include many elderly users. They usually have the following problems:
Problems with colour perception. Solution: use colour redundantly; colour-code larger areas only; colours should differ in at least two primary colours. For elderly people, allow for more brightness after extended computer work.
Problems perceiving small objects. Solution: allow for screen magnification.
Blindness. Solution: special I/O devices (Braille, speech output, physical models).
Special software also helps (e.g., "screen readers", "web-to-speech translators"). You may also visit http://www.w3.org/WAI/#Guidelines for specific guidelines.
Before moving to the next part of the HCI discussion, I would like to introduce the following diagram, which summarizes the levels of analysis in HCI.

HCI in Future
When we consider the digital world we inhabit, the amount and variety of digital technology we encounter is astounding. The last few decades have seen not only an enormous growth in the number of devices but also an almost explosive diversification in the nature of these devices as they have entered every aspect of our lives. We face a future where we will need to live with an ever-growing and always-changing set of interconnected digital devices. Some of these will be close to us and even embedded within us, while others will be invisibly built into our surrounding environment. How these technologies are embedded in the world, and the extent to which they and their interactive capabilities are noticeable to us, will be equally diverse. We need to understand and design for interaction in a world where the notion of an interface is no longer easily defined, stable or fixed. Here, we consider how this will affect the boundaries between computational devices: between computers and people, and between computers and the physical world.
However, no matter what happens, the characteristics that make us human should continue to be manifest in our relationship with technology.
As computer systems and programs become more sophisticated, they have also become more independent. More are beginning to make choices and decisions on our behalf. For example, popular recommender systems give guidance on what we might like to do or buy. As computers have become more autonomous, they have also become increasingly present in our world. ‘Clever’ computers can now clean our floors, help us find our way, and are even beginning to become our pets and companions. These developments raise fundamental questions about how we should live with them and what our relationships should be, together with larger social and ethical issues of responsibility and accountability.
What might be an appropriate kind of relationship? Rather than instructing or issuing commands, it may mean designing interactions to be more like human-human conversations. But will people be happy talking to their robots as if they were pets or even people? This question has been around for many years but will become more pressing, as clever computers become more of a reality.
We need to decide. We also need to consider the consequences of a world inhabited by independent computers that we have less control over. A sense of control over our own environment is a key human value. Will clever computer systems undermine or enhance this?
Part of this sense of control is related to how we account for our activities. We treat being responsible for what we do as a measure of sophistication and knowledge; this is why children and adolescents are not subject to criminal proceedings in the same way as adults. Such systems of accountability are not confined to matters of criminality of course but also suffuse our professional and personal actions. This, in turn, drives many broader societal relations and understandings. As computing takes on more roles in our activities and as our environment becomes constructed and controlled by computers that we might not even be aware of, these systems of etiquette, accountability and responsibility will be affected. How will we know that this is happening? Who will judge what the consequences might be?
Technology is changing, people are changing, and society is changing. All this is happening at a rapid and rather alarming rate. It is crucial that HCI extends its methods and approaches to focus more clearly on human values. This will require a more sensitive view of the role, function and consequences of design, just as it will force HCI to be more inventive. HCI will need to form new partnerships with other disciplines, too, and for this to happen HCI practitioners will need to be sympathetic to the tools and techniques of other trades. Finally, HCI will need to re-examine and reflect on its basic terms and concepts. Outdated notions of the ‘user’, the ‘computer’ and ‘interaction’ are hardly sufficient to encompass all that HCI will need to attend to.