How People See – Peripheral Vision, Pattern Recognition, and Facial Recognition (Part 2)

Our vision trumps all our other senses. Half of the brain’s resources are dedicated to seeing and interpreting what we see. What our eyes physically perceive is only one part of the story. The images coming into our brains are changed and interpreted. It’s really our brains that are “seeing.”

You think that as you’re walking around looking at the world, your eyes are sending information to your brain, which processes it and gives you a realistic experience of “what’s out there.” But the truth is that what your brain comes up with isn’t exactly what your eyes are seeing. Your brain is constantly interpreting everything you see.

In this second part of the article, we discuss Peripheral Vision, Pattern Recognition, and Facial Recognition.

Peripheral Vision


We can’t help but notice movement that happens outside our direct gaze. For example, if you’re reading text on a computer screen and there’s an animation or something blinking off to the side, you can’t help but look at it. This can be quite annoying when you’re trying to concentrate on the text in front of you. This pull on our attention is why advertisers place advertisements in the periphery of websites. Even though we may find it annoying, it does serve its purpose: to get our attention.

You have two types of vision: central and peripheral. Central vision is what you use to look at things directly and to see details. Peripheral vision encompasses the rest of the visual field—areas that are visible, but that you’re not looking at directly. Being able to see things out of the corner of your eye is certainly useful, but new research from Kansas State University shows that peripheral vision is more important in understanding the world around us than most people realize. It seems that we get information on what type of scene we’re looking at from our peripheral vision.


Watch this video while staring at the cross in the center, and you’ll see bulbous foreheads, gigantic noses, and terrifying eyes. Watch it again while looking at the faces, and you’ll see they’re perfectly normal pictures of the same Hollywood actors (Thompson, 2012).

This is part of a research project by Matthew Thompson that investigates how the brain compares and exaggerates the differences between two faces when they’re briefly flashed on screen alongside each other. Writing on his website, Thompson explains:

“Like many interesting scientific discoveries, this one was an accident. Sean Murphy, an undergraduate student, was working alone in the lab on a set of faces for one of his experiments. He aligned a set of faces at the eyes and started to skim through them. After a few seconds, he noticed that some of the faces began to appear highly deformed and grotesque. He looked at the especially ugly faces individually, but each of them appeared normal or even attractive. We called it the “Flashed Face Distortion Effect”…

“The effect seems to depend on processing each face in light of the others. By aligning the faces at the eyes and presenting them quickly, it becomes much easier to compare them, so the differences between the faces are more extreme. If someone has a large jaw, it looks almost ogre-like. If they have an especially large forehead, then it looks particularly bulbous.”

Identifying Objects by Recognizing Patterns

Recognizing patterns helps you make quick sense of the sensory input that comes to you every second. Your eyes and brain want to create patterns, even if there are no real patterns there. The figure below suggests that there are four sets of two dots rather than eight individual dots. People interpret the white space, or lack of it, as a pattern.
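The grouping the figure illustrates (the Gestalt principle of proximity) can be caricatured as a simple clustering rule: dots separated by a small gap belong together, and a larger gap starts a new group. A minimal, illustrative sketch in Python; the dot positions and gap threshold are invented for the example:

```python
# Group dot positions by proximity: a gap larger than the threshold
# starts a new group, mimicking how the eye pairs closely spaced dots.
def group_by_proximity(positions, gap_threshold):
    groups = [[positions[0]]]
    for prev, cur in zip(positions, positions[1:]):
        if cur - prev > gap_threshold:
            groups.append([cur])    # large gap: start a new group
        else:
            groups[-1].append(cur)  # small gap: same group
    return groups

# Eight dots laid out as four close pairs (hypothetical coordinates)
dots = [0, 1, 4, 5, 8, 9, 12, 13]
print(group_by_proximity(dots, gap_threshold=2))
# → [[0, 1], [4, 5], [8, 9], [12, 13]]
```

Eight dots go in, but four pairs come out, just as the eye reports four sets of two rather than eight individual dots.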


Pattern recognition (or pattern matching) is the way we process everything we see, from people’s faces to the written word. When a visual stimulus enters our eyes, it immediately starts a chain reaction in the brain. We subconsciously hunt for anything we have experienced in the past that resembles the current stimulus.

Pattern matching also influences how familiar something feels. The more often you see something, the more patterns you have stored and the easier related patterns are to identify. When patterns are easily matched, they feel familiar or “normal.” Stimuli that are unmatchable or difficult to match feel foreign and can even be unsettling. While drastically different, these new visuals are actually more memorable, at least at first. Like anything else, repeated exposure dulls the “shock” value of the stimulus, eventually making it feel ordinary as well.
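The exposure effect described above can be caricatured as a lookup table of stored patterns, where each sighting strengthens the match. This is a toy model, not a claim about how the brain actually works, and the familiarity threshold is arbitrary:

```python
from collections import Counter

# Toy model of familiarity: each exposure to a stimulus adds a stored
# trace; stimuli seen often "match" easily and so feel normal.
exposures = Counter()

def see(stimulus):
    exposures[stimulus] += 1

def feels_familiar(stimulus, threshold=3):
    # Frequently seen (easily matched) patterns feel familiar
    return exposures[stimulus] >= threshold

for _ in range(5):
    see("serif font")
see("inverted text")

print(feels_familiar("serif font"))     # → True
print(feels_familiar("inverted text"))  # → False
```

After enough repetitions, `feels_familiar("inverted text")` would also flip to `True`, mirroring how repeated exposure makes a once-jarring visual feel ordinary.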

Facial Recognition


One of the most active forms of pattern matching is facial recognition. No pattern we recognize has the impact of the human face. Human interactions are just as likely to be bad as good, so we evolved a semi-conscious ability to read faces. People are instinctively drawn to the human face for two reasons: first, to identify another human; second, to read the person’s facial expression and determine whether they are friend or foe.

Research by Catherine Mondloch et al. (1999) shows that newborns less than an hour old prefer looking at images that have facial features.

There are three essential phases in recognizing human faces:

  • In the first phase, we encode the physical characteristics of the face we are looking at.
  • In the second phase, we encode the person’s identity, which signals that this is someone we know.
  • In the final phase, we retrieve the name, associating the now-familiar face with the person’s name.
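The three phases can be caricatured in code as a pipeline from raw features, to a familiarity check, to name retrieval. This is a toy illustration only; the feature labels, stored identities, and names are all invented for the example:

```python
# Toy three-phase face recognition pipeline.
# Phase 1: physical characteristics arrive encoded as a feature tuple.
# Phase 2: the features are checked against stored identities (familiarity).
# Phase 3: the name associated with the matched identity is retrieved.

# Hypothetical stored identities: features -> identity id
known_faces = {("large jaw", "brown eyes"): "person_42"}
# Identity id -> name (name retrieval is a separate, later step)
names = {"person_42": "Alice"}

def recognize(features):
    identity = known_faces.get(tuple(features))  # phase 2
    if identity is None:
        return "unfamiliar face"
    # Phase 3: a face can feel familiar even before the name comes
    # to mind; here the lookup simply succeeds or falls through.
    return names.get(identity, "familiar, but name unknown")

print(recognize(["large jaw", "brown eyes"]))   # → Alice
print(recognize(["round face", "green eyes"]))  # → unfamiliar face
```

Separating the identity table from the name table mirrors the familiar experience of knowing you have met someone without being able to recall their name: phase two succeeds while phase three fails.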

In web design, faces can draw attention or set a mood. People naturally identify with images of people over objects, landscapes, or abstractions. Furthermore, the expression of the depicted person will influence how the user feels about the website. The more authentic the photo, the more effective it will be. This means dropping the stock photography and being very intentional about the photos you take. Users will pick up on the emotions of those depicted, so avoid photos of people looking uncomfortable at all costs. This takes effort, as the average person feels self-conscious when being photographed.

“How People See” is a four-part article. The next part will discuss Scanning Based on Past Experience and Expectations, and Selective Disregard & Change Blindness. Visit us again for the next part.

You can also read our first part here, How People See – Brain Shortcuts.
