A Complete List of NLP Submodalities

1

What is a Submodality?

In Neuro-Linguistic Programming (NLP), we have a lot of jargon. One such term is “submodality.”

Modalities are Senses

A “modality” is one of the senses. Typically we think of five[1]:

  • Seeing
  • Hearing
  • Touching
  • Smelling
  • Tasting

In NLP jargon, these are…

  • Visual
  • Auditory
  • Kinesthetic
  • Olfactory, and
  • Gustatory.

A “sub-modality” is an aspect of a modality. For example, “brightness” is a visual submodality.

Submodalities Change Meaning

How bright or dim something appears affects the meaning we give to it. The meaning then changes how we feel.

A candlelit dinner feels different than the bright, fluorescent lights of a dentist’s office. Imagine a romantic dinner with bright fluorescent lights. How unromantic! Imagine a dentist’s visit lit by candlelight. Very creepy!

A submodality changes the meaning without changing the content. It’s the same dentist’s office, but by candlelight it feels like a horror movie.

Submodalities Change Feeling

Most people aren’t even aware of this distinction. It’s totally unconscious, in the background.

Among the few people who really understand this are television producers. People who make TV shows actively manipulate submodalities to create emotional scenes. If the same scene, with the same lines and the same actors, is shot from a different angle, with different lighting and a different zoom, it’s a totally different experience.

If we understand how these work, we can not only make great TV shows but also change our lives. Submodalities are the power behind effective NLP techniques. For instance, the Reconsolidation of Traumatic Memories (RTM) method has been shown in peer-reviewed published research to help war veterans with PTSD recover from traumatic flashbacks in a few short practice sessions[2]. This is after years of nightmares and other failed therapies.

Understanding how submodalities work can help us transform mental illness, increase motivation, change beliefs, eliminate stress, increase persuasive powers, and much more. Typically these changes happen very quickly, as quickly as switching on bright fluorescent lights can ruin a romantic mood.

If you learn submodalities, people will think you have superpowers. These distinctions give you incredible mental flexibility and creativity.

We Think in All Senses

We not only have external senses; we also have internal representations, or thinking. Many people assume that thinking is talking to yourself in your mind. That is one form of thinking, but not the only one.

We can also imagine hearing other people talking, or hear music in our minds. We can make pictures and mental movies. We can imagine a touch, or what it feels like to throw a ball. We can even create immersive experiences in all five senses.

Hypnotists often do a “suggestibility test” like this: Imagine going into your kitchen. Look at your refrigerator, and see what color it is. Grab the handle and feel the hardness in your hand. Pull the door open, feeling it give way with that audible sound as the seal breaks, and a wave of cool air hits your face. The light turns on, and you can see a bright, yellow lemon.

(Photo: cut lemons)

Pick up that lemon and feel the cool, hard rind in your hand. Grab a kitchen knife and something to cut the lemon on. Slice it down the middle, seeing the juices spray into the air. Bring the lemon slice to your nose and smell that strong lemon scent. And then take a big bite out of that lemon and taste that powerfully sour flavor now.

Now you are salivating. Or at least people with vivid imaginations are. You didn’t need to be trained by Pavlov to drool on command; all you had to do was use your imagination. In hypnosis this is called “the lemon test.”

If you just did this experiment as you read, you thought in all five senses. So thinking is not just auditory inner talk, it’s re-presenting the five senses in our minds. Each of these senses then has further distinctions we can make, submodalities, which change what things mean and how we feel.

Visual, auditory, and kinesthetic are the modalities most commonly explored, often summarized as VAK. Visual has the most known submodalities.

We can change the actual brightness of a room, or we can change the brightness in a mental picture or movie. The latter is typically what we do in NLP. We can change how we think to change how we feel and act.

Submodalities Get Us Unstuck

Brightness is just one of many submodalities. This gives us nearly infinite possibilities to explore.

When people have problems, it feels like they are stuck. Either they have…

  • no options,
  • one bad option, or
  • a double-bind between two bad options.

If you learn submodalities, you’ll always have many more than two options. Any situation has an infinite number of possible meanings, depending on how we represent it. This gives you incredible freedom and creativity.

If you learn submodalities, you will never be stuck again.

2

Listing All NLP Submodalities

If you look up lists of NLP submodalities, no two are exactly the same. Some have more entries than others. Some lack submodalities that others list.

As far as I’m aware, human beings have not yet made a comprehensive list of all submodalities. This means we have an opportunity to explore new frontiers in human psychology.

If we haven’t even listed them, we certainly haven’t effectively understood how to use them. To do that, we first need a complete list. Then we can explore each submodality in depth: discovering how to change it, how doing so affects meaning, and how meaning changes feeling and action.

In Richard Bandler’s book Using Your Brain–for a Change, he lists quite a few submodalities. Then he says to experiment with each in your mind. But he doesn’t specify how to do these experiments. It could take a few hundred hours just to explore the basic ones he lists in his book. After we make a comprehensive list, we also need practices, guided recordings, classes, and so on to explore these distinctions.

After exploring single submodalities, we could explore combinations, two or three at a time. The RTM method utilizes six submodalities combined together. There are submodalities on this list that I’ve never seen used in any technique. Can we solve problems that are currently unsolvable with these underutilized ways of constructing reality? Until we list and explore them, we’ll never know.

Techniques like RTM work for untrained people. But what if we train in them? Learning submodalities not only allows us to create effective change methods for others, but also develops mental skill. Roberto Assagioli, founder of Psychosynthesis, encouraged people to develop their mental skills by imagining things like the lemon test, in all five senses.

By developing such inner skill, you can do incredible things with your mind that affect your thinking, emotions, and even automatic functions of the body. Through visualization and breathing practices, Tibetan Buddhist monks practicing “tummo” can increase their body temperature so high that they can sit comfortably in the freezing mountain air, covered in wet cloth, with steam pouring off of them. What other things are possible for humans with additional mental tools?

The human genetic code hasn’t changed much in the past 100,000 years. Whatever we discover now will work for another 100,000 years, barring any radical genetic engineering. So this is an extremely important project.

I don’t claim the lists below are 100% complete. In fact, if you notice something I’ve missed, please contact me. This is an ongoing exploration, the goal of which is to advance human understanding. I would love to collaborate with others in this exciting project.

I didn’t make most of these distinctions. I am standing on the shoulders of giants, mostly Steve and Connirae Andreas, but also Richard Bandler and many other contributors to the field of NLP. I took NLP training with NLP Comprehensive. Steve and Connirae originally wrote the manual NLP Comprehensive uses, and its submodality module had a very complete list. But just by thinking about it a little, I came up with many more. So I believe my list is the most comprehensive that exists, or at least the most comprehensive I’ve seen so far. I’d be happy to be proven wrong if someone has a better one.

As I personally explore each submodality, I will create pages, linked from the name of that submodality, that explore even more detail (sketched as a simple catalog entry after the list below). For instance, I’ll explain…

  • Several variations on that submodality,
  • Multiple ways to change it,
  • Overlaps with other submodalities,
  • Meanings associated with that submodality,
  • Idioms in English which reference it, and
  • NLP methods that utilize it.
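
To make that structure concrete, here is a minimal sketch in Python of how one such catalog entry could be organized. The class name, field names, and the brightness example values are my own illustration, not part of any NLP standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubmodalityEntry:
    """One hypothetical catalog entry for a single submodality."""
    name: str
    variations: List[str] = field(default_factory=list)       # several variations on the submodality
    ways_to_change: List[str] = field(default_factory=list)   # multiple ways to change it
    overlaps: List[str] = field(default_factory=list)         # overlaps with other submodalities
    meanings: List[str] = field(default_factory=list)         # meanings associated with it
    idioms: List[str] = field(default_factory=list)           # English idioms that reference it
    nlp_methods: List[str] = field(default_factory=list)      # NLP methods that utilize it

# An example entry with illustrative values only.
brightness = SubmodalityEntry(
    name="Brightness (image)",
    ways_to_change=["imagine turning a dimmer switch up", "imagine turning it down"],
    idioms=["a bright future", "a dark mood"],
)
```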

Visual Submodalities

Submodality | Description
Number | The number of images
Movement (pic vs. movie) | Still picture or movie
Movement (image) | Movement of the image itself
Movement (object/person) | Movement of an object/person in the image, or part of an object/person
Movement (part) | Movement of part of an object/person (e.g. a hand waving)
Movement (perceiver) | Movement of the perceiver / camera angle, e.g. panning
Location (perceiver vs.) | Location of the image/object/person relative to you
Location (objects vs.) | Location of images/objects/people relative to each other
Distance (perceiver) | Distance of images/objects/people relative to you
Distance (objects vs.) | Distance of images/objects/people relative to each other
Direction | When something is moving, what direction is it headed?
Time direction | Movie plays forward in time or backwards in time
Speed (movie) | Slow motion, real time, or faster than real time
Speed (image) | When an image itself moves around, how fast it goes
Speed (object/person) | When an object/person moves, how fast it goes
Speed (part) | Speed of part of an object/person (e.g. a waving hand)
Acceleration (any) | When something is increasing in speed, how fast it accelerates
Duration (movie) | How long the movie plays for, e.g. 10 seconds
Scope in time | How much real time the movie covers; when it starts and ends
Size (image) | Size of the image, including height, width, thickness, etc.
Size (object/person) | Size of an object/person in the image
Size (part of object/person) | Size of part of an object/person in the image (e.g. doorknob, eyes)
Size (perceiver) | Size of the perceiving self
Shape | Shape of the image/symbol
Perceptual Position | Where you are viewing from (generally: Self, Other, Observer)
Color (object) | Color of an object
Color vs. B&W | Whether the image is in color or black and white
Color vs. B&W (part) | Whether part of an image or an object is in color or black and white
Brightness (image) | Brightness or dimness of the whole image
Brightness (part) | Brightness or dimness of part of the image
Hue / Color Balance | Reddish, greenish, bluish, etc.
Lighting | Number, location, direction, and intensity of light sources, including spotlights
Sharpness | How sharp or dull
Contrast | How high or low the contrast is
Focus / Blur (image) | How focused or blurry the whole image is
Focus / Blur (part) | Whether there are areas that are blurry or in focus
Transparency (image) | Transparency of the whole image (0% = opaque, 100% = nonexistent)
Transparency (object/person) | Transparency of an object/person (along a 0-100% scale)
Transparency (part) | Transparency of part of an object/person
Orientation / Tilt | Image tilt on any/all of the 3 axes in space (held still, as opposed to rotating)
Rotation (image) | Image is rotating: axis? direction? speed? acceleration? changing direction/speed/acceleration?
Rotation (object/person) | Object/person is rotating: axis? direction? speed? acceleration? changing direction/speed/acceleration?
Rotation (part) | Part of an object/person is rotating: axis? direction? speed? acceleration? changing direction/speed/acceleration?
Rotation (perceiver) | Perceiver is rotating: axis? direction? speed? acceleration? changing direction/speed/acceleration?
3D vs. flat | Whether the image has depth (also affects how things move in the distance vs. close up, i.e. parallax)
Framed vs. Panoramic | Whether the image is framed (if so, what color, shape, size, thickness, etc.)
Simultaneous vs. Sequential | Multiple movies or images: one after the other, or at the same time?
Zoom / Magnification | Zoomed in or out
Aspect ratio | Height-to-width ratio of the image
Image quality / Pixels | From ultra-HD vivid clarity to pixelated, 240p, or fuzzy
Flicker / Strobe | No flicker = smooth or still image; flicker = missing frames or a light going on and off; also the flicker rate
Seeing Location | Seeing directly out of your own eyes, or even slightly off?
Words | Letter spacing, line spacing, typeface, left-to-right or top-to-bottom, in addition to size, color, etc.
Sparkle | A combination of location, color, number, brightness, flicker/strobe, etc.
Bulge / Cave (image) | The entire image bulges or caves in
Bulge / Cave (part) | Part of an image, object, or person bulges or caves in
Shadow | Related to lighting, but not always (e.g. a drop shadow on text); can be crisp or diffuse, etc.

As you can see, there are an enormous number of visual submodalities to explore. This is even before combining them with other visual submodalities, or with auditory or kinesthetic submodalities. The possibilities are vast.
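
For a rough sense of just how vast, here is a quick back-of-the-envelope count in Python. It simply treats each of the 57 rows in the table above as one submodality; the numbers cover the visual list alone, before any auditory or kinesthetic distinctions are added.

```python
from math import comb

n_visual = 57  # rows in the visual submodalities table above

print(comb(n_visual, 2))  # 1,596 possible pairs
print(comb(n_visual, 3))  # 29,260 possible triples
print(comb(n_visual, 6))  # 36,288,252 six-way combinations (RTM combines six submodalities)
```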

This isn’t the only way to group these. For instance, I’ve broken out whole and part, as in “Focus/blur (image)” vs. “Focus/blur (part).” It might be more sensible to group these together, then break them out only when exploring them. That would make the number of submodalities more manageable.
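
If we regrouped them that way, the visual list might collapse along these lines. This is only a sketch; the base names come from the table above, but the exact scope lists are my own reading of it, not anything canonical.

```python
# Collapse the scope variants (whole image, part, object/person, perceiver)
# into one base submodality with a list of scopes.
grouped_visual = {
    "Movement":     ["image", "object/person", "part", "perceiver"],
    "Speed":        ["movie", "image", "object/person", "part"],
    "Size":         ["image", "object/person", "part", "perceiver"],
    "Brightness":   ["image", "part"],
    "Focus / Blur": ["image", "part"],
    "Transparency": ["image", "object/person", "part"],
    "Rotation":     ["image", "object/person", "part", "perceiver"],
    "Bulge / Cave": ["image", "part"],
}

# 8 base submodalities covering 25 scope variants in this sketch.
print(len(grouped_visual), sum(len(scopes) for scopes in grouped_visual.values()))
```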

May all beings be happy and free from suffering.

Duff McDuffee
Hypnotist

(Photos courtesy of Unsplash.com.)

Footnotes

1. There are actually more than five senses, and I don’t mean psychic abilities. Touch should be “somatosensory.” Additional divisions of “kinesthetic” include things like…

  • equilibrioception (balance, or the vestibular system),
  • proprioception (knowing where parts of your body are without looking),
  • interoception (feeling inside the body),
  • thermoception (sensing hot and cold),
  • nociception (the ability to feel pain),
  • hunger, and
  • thirst.

Non-human animals have additional senses that we humans (as of 2020) do not, such as…

  • electroreception (sensing electrical fields),
  • magnetoreception (noticing the magnetic field of the earth),
  • echolocation (which some blind humans have learned),
  • infrared sensing, and
  • surface wave detection (as in fish).

There may even be more than this, or other models that divide the senses up differently. Remember: the map is not the territory. We normally describe five senses, but that isn’t comprehensive, nor the only way to describe it.

Many of our technologies give us senses we don’t have, by translating data from things we can’t sense (e.g. radio waves) into things we can.

2. See the Research and Recognition Project, especially their Research Overview.

  • Last Updated on November 25, 2020