Visual design creates a user experience (UX) that shapes how people understand the nature and implications of created works; in most capitalistic societies it is closely connected to advertising.

Good design draws heavily on psychology: users build mental models of how something is supposed to work, so design dovetails with most of our cognitive biases.

At a large scale, many projects hold lengthy meetings about the size of a logo or the precise page a site opens to, because seemingly inane details can make a dramatic difference to the user’s story.

Users travel through a predictable flow of thought as they use something:

  1. A purpose they want to accomplish, which doesn’t change throughout the process.
  2. A decision to do an action.
  3. The physical action itself.
  4. Their observation of the action.
  5. Their interpretation of the state of the thing they’re interacting with.
  6. An evaluation of that state.

As they attempt to perform an action, they run a complex decision-making calculus to determine whether they still want to keep going.

  • In general, people choose the path of least resistance in their decision-making calculus, and good design fosters the decisions they would naturally want to make.
  • If the object isn’t designed well enough, they’ll stop using it.
  • When that object has alternative objects for accomplishing the same task, they may use another object.
  • If that object is absolutely necessary to perform the task, they’ll either have to exercise willpower to complete it or will find a creative way to not perform that task at all.

Good design comes from a few traits:

  1. Uses both knowledge of the world and knowledge in the users’ minds.
  2. Simplifies the structure of tasks.
  3. Makes things readily visible.
  4. Correctly connects associations between action and purpose.
  5. Exploits the power of both natural and artificial constraints.
  6. When all else fails, creates standards to nail things down.

We tend to need both order and variety. Too much order feels boring and oppressive, while too much variety feels chaotic and unpleasant. Though the two are easy to define, balancing them runs through all of design.


Good design distills into a wide variety of reliable axioms. These span the entire range of psychological perception and bias, and social scientists and UX designers are constantly discovering new ones.

Anyone who follows the rules can design well, even when they’re not very creative.

Information filtering

  • Aesthetic-usability effect – we often perceive aesthetically pleasing design as being more usable
  • Anchoring bias – we tend to rely heavily on the first piece of information we see
  • Attentional bias – we filter our thoughts based on what we’re paying attention to
  • Banner blindness – we tune out what we repeatedly see
  • Centre-Stage Effect – we tend to choose the middle option in a set of items
  • Cognitive load – we require a certain amount of effort to understand things, which changes based on how the information is presented
  • Confirmation bias – we tend to look for evidence that confirms what we already think
  • Contrast – our attention draws to higher visual weights
  • Empathy gap – designers severely underestimate how much emotions influence user behaviors
  • Expectations bias – we are influenced by our own predetermined expectations
  • Fitts’ law – things are easier for us to interact with when they’re large and close
  • Framing – how information is presented determines how users decide
  • Hick’s Law – more options make decisions harder
  • Isolation/von Restorff effect – when showing multiple items, the most different of them will stand out the most
  • Juxtaposition – elements that are close and similar tend to be perceived as a single unit
  • Law of proximity – elements tend to get grouped when they’re near each other
  • Mental model – we have a preconceived opinion on how things work
  • Priming – previous perceptions will influence someone’s decision
  • Progressive disclosure – we are less overwhelmed from complex features if they’re shown later
  • Selective attention – we filter out things when we’re focused on other things
  • Spark effect – we are more likely to take actions when they’re small
  • Survivorship bias – we neglect things that don’t make it past a selection process
  • Tesler’s Law of Conservation of Complexity – every system has a certain amount of complexity that can’t be removed; simplifying the interface only moves that complexity elsewhere (into the system or onto the user)
  • Visual hierarchy – there’s a natural order of how we see things
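Two of the laws above have standard quantitative forms. A minimal sketch (the coefficient values here are placeholder assumptions; in practice they are fitted from measured user data):

```python
import math

def fitts_time(distance: float, width: float, a: float = 0.1, b: float = 0.1) -> float:
    """Fitts' law (Shannon formulation): T = a + b * log2(D/W + 1).
    Movement time grows with distance to the target and shrinks as
    the target gets larger."""
    return a + b * math.log2(distance / width + 1)

def hick_time(n_options: int, b: float = 0.2) -> float:
    """Hick's law: T = b * log2(n + 1). Decision time grows
    logarithmically with the number of equally likely options."""
    return b * math.log2(n_options + 1)

# A large, nearby button is quicker to hit than a small, distant one,
# and a short menu is quicker to choose from than a long one.
```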

Information presentation

  • Decoy effect – create an option that’s easy for us to discard
  • Discoverability – different elements have differing difficulty for us to discover them
  • External trigger – information on what we should do next is inside the prompt itself
  • Feedback loop – elements communicate what happens after we take action
  • Nudge – subtle hints affect our decisions
  • Occam’s razor – All things equal, a simpler thing is better
  • Provide exit points – invite the user to leave the software at the right moment
  • Shaping – incrementally reinforce actions to get closer to a target behavior
  • Signifiers – elements can communicate what they will do
  • Visual anchors – elements can guide our eyes

Finding meaning

  • Authority bias – we give weight to an authority figure’s opinion
  • Barnum-Forer effect – we believe generic descriptions of a person apply to ourselves
  • Chunking – we remember grouped information better
  • Curiosity gap – we want to find missing information
  • Familiarity bias – we prefer familiar experiences
  • Goal-Gradient effect – The closer we are to a goal, the faster we approach it
  • Group Attractiveness effect – individual items feel more attractive when presented in a group
  • Halo effect – we judge the entirety of something based on one of its traits
  • Hindsight bias – we overestimate our ability to have predicted an outcome
  • Law of common region – we perceive elements as groups if they share a clearly defined boundary
  • Law of Prägnanz – since it takes the least amount of effort, we distill vague things into a simpler and more complete form
  • Law of similarity – we tend to perceive a relationship between similar-looking elements
  • Law of uniform connectedness – we see visually connected elements as more related than elements with no connection
  • Noble edge effect – we prefer organizations that appear socially responsible
  • Parkinson’s law – we expand tasks to fill in extra time
  • Scarcity – we value things more when there’s a limited supply of them
  • Self-initiated triggers – we are more likely to interact with self-made prompts
  • Singularity – we care disproportionately more about an individual than about a group
  • Skeuomorphism – we adapt more easily to things that look like real-life objects
  • Social proof – we adapt our actions from what other people do
    • Survey bias – we tend to skew survey responses toward socially acceptable answers
  • Spotlight effect – we believe we’re being noticed more than we really are
  • Streisand effect – censoring information increases awareness of that information
  • Pseudo-set framing – tasks that are part of a group are more tempting to complete
  • Temptation bundling – hard tasks are less scary when paired with things we desire
  • Unit bias – one unit of something feels like the ideal amount
  • Variable reward – we profoundly enjoy unexpected rewards
  • Weber’s law – we adapt better to small incremental changes

Pseudo-conscious states of mind

  • Affect heuristic – our current feelings cloud and influence our judgment
  • Aha! moment – when users first realize a product’s value
  • Bandwagon Effect – we tend to believe something proportionally to how much others have believed it
  • Cashless effect – we spend more when we don’t actually see our money
  • Chronoception – our perception of time is subjective
  • Cognitive dissonance – we can hold opposing ideas in the mind at the same time
  • Curse of knowledge – we’re often unaware other people don’t possess the same knowledge
  • Dunning-Kruger effect – we tend to overestimate our skills when we don’t know much
  • False consensus effect – we tend to overestimate how much other people agree with us
  • Feedforward – we know what to expect before acting
  • Flow state – we can be fully immersed into a task
  • Internal trigger – we can be prompted to act based on a memory
  • Law of locality – we connect things that are nearby other things
  • Law of the instrument – the object we use dictates how we perceive everything else
  • Miller’s law – we typically can only keep 3-7 things in our memory at once
  • Reciprocity – we feel the need to respond when we receive something
  • Recognition over recall – we recognize things more easily than recalling them from memory
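Chunking has a direct interface application: long identifiers become far easier to hold in memory when split into a few groups. A sketch (the 3-3-4 pattern is just the familiar US phone grouping, used as an example):

```python
def chunk(digits: str, pattern=(3, 3, 4)) -> str:
    """Split an unbroken digit string into hyphen-separated groups
    so it reads as a few memorable chunks instead of one long run."""
    parts, i = [], 0
    for size in pattern:
        parts.append(digits[i:i + size])
        i += size
    if i < len(digits):       # any leftover digits form one final group
        parts.append(digits[i:])
    return "-".join(p for p in parts if p)

# chunk("5134240945") -> "513-424-0945"
```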

Mostly-conscious decision-making

  • Availability heuristic – we favor recent and available information over past information
  • Decision fatigue – making many decisions makes us more irrational toward more decisions
  • Endowment effect – we value something more if we feel it’s ours
  • Fresh start effect – we’re more likely to take action if we feel there’s something new to it
  • Investment loops – when we invest into something, we’re more likely to use it later
  • Jakob’s law – when we interact with something, we treat it like other things we’re familiar with
  • Hawthorne effect – we change our behavior when we know we’re being observed
  • Hyperbolic Discounting – we tend to prioritize immediate benefits over larger long-term benefits
  • Loss aversion – we’re more likely to avoid losses than earn the same gains
  • Sensory appeal – we tend to use things more often that appeal to multiple senses
  • Status quo bias – we tend to do what we’ve done before until we have reason to do otherwise
  • Sunk cost effect – we’re slow to pull out of something we’ve invested into
  • Reactance – we’re less likely to do something when we feel forced

Time management

  • Chipmunk effect – we are more likely to select a video that plays faster than normal playback speed (around 1.2x)
  • Commitment/consistency – we tend to be consistent with our previous actions
  • Default bias – we don’t tend to change established behaviors
  • Doherty threshold – we’re most productive when the system responds to our interactions in less than 0.4 seconds
  • Labor illusion – we value results more when we can see the work being done
  • Labor perception bias – we tend to imagine some things always take work
  • Planning fallacy – we tend to underestimate how much time a task will take
  • Spacing effect – we learn more when study sessions are spaced out

Memory


  • Delighters – we more easily remember pleasure that’s unexpected and playful
  • Method of loci – we remember things more when it’s associated with a location
  • Negativity bias – we tend to remember negative events more than positive ones
  • Picture superiority effect – we tend to remember pictures more than words
  • Serial position effect – we more easily remember the first and last things in a series
  • Storytelling effect – we remember stories more than facts
  • Zeigarnik effect – we remember uncompleted or interrupted tasks more than completed ones


  • Backfire effect – we tend to intensify our convictions when they’re challenged
  • IKEA effect – we value things much more when we partially create them
  • Observer-expectancy effect – researchers’ biases affect the participants of an experiment
  • Pareto principle – 80% of the results come from 20% of the causes
  • Peak-end rule – we judge an experience by its peak and how it ends, not the total sum of our experiences
  • Postel’s law – ideal scenarios come from receiving liberally and sending carefully
  • Second-order effect – unintended second-level consequences ripple from decisions
  • Self-serving bias – we tend to take credit for positive events and blame others for negative ones

However, the rules of good design are not mechanical. Over-engineering a product can destroy it far faster than missing a few good design standards.


A tool generally has several parts:

  1. The part which actually fixes a problem (like a screwdriver tip or graphical interface).
  2. The part the user interacts with (the “interface”), which is the domain of UX.

A designer communicates nonverbal information to the user through how the object and interface are built. If the design fails to communicate through implication, it must fall back on labels and instructions.

  • If there are any written instructions, the design has room for improvement.

If the designer doesn’t communicate their ideas well, the user is abandoned and forced to make their own conclusions.

  • Usually, they’ll conclude wrongly and misuse the product, generating frustration.
  • Great design means the user will feel as if the object is an invisible extension of their own body.

The only way a designer can successfully communicate an idea is to know what they want the user to do.

  • This should be a comparatively small set of tasks compared to all the available features.
  • The set of tasks should be given 1 at a time.
  • Each decision they must make is a task in itself.
  • The fewer the required tasks, the more seamless the experience will feel for the user.

The easiest way to figure out what the design communicates is to squint your eyes until it’s blurry, then look at the design.

  • The most obvious things will stick out more than the rest, often through color or size.

Choice overload can be a very frequent problem, and every decision will wear down the user. For that reason, unpack the scope of decisions with a rigid procedure:

  1. Start from the user’s initial needs.
    • Show how each choice’s consequences can feel.
  2. Provide easier, smaller decisions before the larger one, with as few selections as possible.
    • Break out the decisions into multiple questions and start with fewer options, then sequentially progress to more.
  3. Avoid any unnecessary friction which could make decision-making more difficult for the user.
    • Make categories as meaningful as possible for the customer.
    • When a decision is likely, pre-select for them to allow them more convenience.
    • Personalize the experience for them, but not so much that they feel their privacy was violated.
  4. Delay optional or difficult tasks.
    • Remove the least-selected items completely, especially when there’s no distinctive difference between elements.

The simpler things appear, the louder the communication becomes, so the design should only convey correct information.

  • Put interface elements nearby where the change happens.
  • Make things understandable with relevant graphics or pictures.
  • Take advantage of empty areas (white space) to give “silence” to the design.
  • At the same time, if the graphical element is vague, it’s worse than putting a text label.
  • Text labels should be easy to read even in a dark room, and all abbreviations should be commonplace.

One of the most important components of design is to give reliable, rapid feedback. The user must know their action mattered, or what they should have done instead. The best way to give feedback is with constraints, such as graying out or concealing a button, locking off something mechanically, or limiting permissible text input into a textbox.
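A sketch of constraint-based feedback: the field rejects invalid input the moment it arrives and says exactly what to do instead (the quantity-field scenario is an invented example):

```python
def accept_quantity(raw: str, max_items: int = 99):
    """Validate a quantity field immediately, returning (accepted, feedback)
    so the user always knows whether their action mattered and why."""
    if not raw.isdigit():
        return False, "Digits only - letters and symbols are not accepted."
    value = int(raw)
    if not 1 <= value <= max_items:
        return False, f"Enter a number between 1 and {max_items}."
    return True, f"Added {value} item(s)."
```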

UX doesn’t only apply to visual elements:

  • Even without computers, push-button interfaces may still have light/dial feedback (e.g., heavy machinery).
  • Even without any visual indicators, an interface can be completely audio-based (e.g., a phone number’s directory tree).
  • Many UX designers focus only on visual elements because vision is the most familiar sense; as a result, sound is often misused or mismanaged and can become extremely annoying or risky.
  • The design should incorporate what happens if the sound settings are lowered or off (e.g., <50% volume) and the possibility that the user will not see a visual input (e.g., away from screen).

Users can only draw from their environment (observations) or memory (expectations), so computer design frequently borrows from the world around us:

  • Building things to reflect biological form, such as making the case look like a plant or adding earth tones.
  • Adding context-sensitive colors, such as shades of gray for industrial tones or a brightly colored palette for children’s themes.
  • Inputs that match how we interact with nature (e.g., a scroll wheel interacts like rolling a ball on the ground, a steering wheel interacts like pulling the reins of a horse).

Most users blame themselves when using a product:

  • They treat the design as somewhat infallible or don’t imagine a human designed it.
  • They don’t realize that the designer is communicating to them.
  • When in doubt, treat the designer as a bad communicator.

Good design works with (and never fights) the user’s natural impulses. The most cathartic experience for a user is when they have an easier time using something than they’re accustomed to. If people ever feel frustration over a particular product, there’s an untapped market there.


In the absence of being able to reliably communicate, documentation is a necessary evil. However, the best documentation creates standards and protocols that clarify explicit rules on how things should be.

Make sure to record the documentation before wrapping up the project or sealing everything in. After the concrete has been poured, the software has been compiled, and the engineers have left, there should be plenty of information to indicate how everything works inside.

If there’s a computerized text code (e.g., VIN, MAC address), make sure the content is legible:

  • Avoid using similar symbols (e.g., S vs. 5).
  • Group the elements with standardized separators (e.g., 513-424-0945, 8/…).
  • Make the letters very distinctive from their background (e.g., embossed, painted with thick strokes).
  • If anyone will ever read it out loud, avoid similar-sounding letters (e.g., F vs. S).
  • The complete alphabet, minus the above: A, C, D, E, 6, 1, K, L, M, N, P, R, 5, T, U, X, Y, 0, 2, 3, 4, 7, 8, 9.
  • One alternative is to use military callsigns: Alpha, Bravo, Charlie, Delta, Echo, Foxtrot, Golf, Hotel, India, Juliett, Kilo, Lima, Mike, November, Oscar, Papa, Quebec, Romeo, Sierra, Tango, Uniform, Victor, Whiskey, X-ray, Yankee, Zulu.
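The read-aloud advice can be automated: render each character of a code phonetically so similar-sounding letters can’t be confused. A sketch using the standard callsigns:

```python
NATO = {
    "A": "Alpha", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo",
    "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliett",
    "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar",
    "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango",
    "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray",
    "Y": "Yankee", "Z": "Zulu",
}

def spell_out(code: str) -> str:
    """Expand a code into callsigns so F vs. S or B vs. D can't be
    misheard; digits and other symbols pass through unchanged."""
    return " ".join(NATO.get(ch.upper(), ch) for ch in code)

# spell_out("F5B") -> "Foxtrot 5 Bravo"
```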

Forcing Functions

Forcing functions require the user to do something before something else.

  • A lock-in prevents leaving something without doing something first (e.g., can’t remove the key until car is in park, confirmation to save before quitting).
  • A lock-out prevents something from starting until something else is done first (e.g., a lock that only opens after using a key, a greyed-out button before a selection is made).
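A lock-in can be sketched as a tiny state machine: the exit path refuses to proceed until the protecting condition is cleared (the editor scenario is an invented example):

```python
class Editor:
    """Minimal lock-in: quitting is blocked while changes are unsaved."""
    def __init__(self):
        self.dirty = False
        self.is_open = True

    def edit(self):
        self.dirty = True

    def save(self):
        self.dirty = False

    def quit(self) -> bool:
        if self.dirty:        # lock-in: force the save decision first
            return False      # refuse to quit; the session stays open
        self.is_open = False
        return True
```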

They are extremely valuable for creating user constraints, but use them sparingly because people hate them.

While indirect system interaction is fine when the work isn’t critical, the system should always allow the user to override the product’s intended design whenever needed. Otherwise, this can cause tremendous damage as the user panics.

Human Error

People make mistakes; high-quality products usually don’t. Product failures tend to be logical and easy to diagnose, but human mistakes group into several classes:

  1. Capture error, where a familiar and an unfamiliar action start out the same, so the familiar action wins out by virtue of habit.
  2. Description error, where the inner characteristics of the action are mixed up (but the form appears to be the same), which often comes by being distracted or stressed.
  3. Data-driven error, where incoming sensory information triggers an automatic, unintended action.
  4. Associative activation error, where an unrelated thought guides an action.
  5. Loss of activation error, where someone forgets something and breaks a routine.
  6. Mode error, where an action appropriate for one mode is performed while the system is in another.

To accommodate this, good design incorporates reliable ways to handle human error, typically through a few avenues:

  • Keep logically related elements nearby each other.
  • Keep dangerous actions far apart from commonplace actions.
  • Make everything simpler or more straightforward, which may involve rearranging the placement of objects or moving less-used elements out of the way.
  • Communicate warnings better, which may use symbols but will likely require text. Use this sparingly because it distracts from the design of the original item.
  • Design more forcing functions to create constraints about the situation. Again, use sparingly or the user will learn to hate the product.


Each design has to go through a lot of testing to make sure it works correctly. The organization designing the object will typically:

  • Use “personas” of various types of people to rapidly define what purposes a user might have for the interface.
  • Create an “information architecture” to visually inform users of their location relative to the rest of the interface.
  • Make people feel supported throughout the experience, often by letting them summon extra documentation or contact a person.
  • Treat each person’s task as an “iteration”, which they will expect to repeat with the same results each time, creating a reproducible “user journey”.

Generally, combining elements and grouping them with visual (Gestalt-style) cues creates simplicity (at least until the user has to operate too many controls), and most expert designers find creative ways to merge multiple seemingly unrelated elements.


Colors create instant judgments and feelings that bypass the conscious mind.

  • Within seconds of seeing a color, someone has formed beliefs about that thing.

Warm Colors (red, orange, yellow, gold, pink):

  • Tends to feel exciting
  • Can stimulate hunger, impatience, and aggression
  • Without other colors to dilute it, can agitate or overstimulate

Cool Colors (green, blue, purple):

  • Gives a calming effect
  • Without other warm colors, can feel cold or impersonal

Neutral Colors (white, grey, silver, brown, black):

  • Great for mixing and as a design background
  • Tones down other colors’ intensity
  • Without other stimulating colors, might feel boring

In general, it’s a good idea to use a lot of neutral colors to wash out most of the experience, then focus attention with either warm or cool colors to evoke the correct feeling.

Some colors in particular provide strange, counter-intuitive associations.

  • People usually dislike yellow, but people who prefer yellow adore it.
  • The shading of blue can make something feel either highly masculine or highly feminine.
  • Black is extremely bold and polarizing.
  • People calm down for up to about thirty minutes when looking at pink, but bright pink is visually overwhelming.


The structure should reinforce the way the user should understand what’s important and what they should “do”:

  • A visual hierarchy that makes people focus on the most important piece of content first, then move to the next most important, and so on. You should easily see that dominance if you squint or take your glasses off.
  • Make the “call to action” as clear and distinctive as possible compared to the rest of the product.
  • Elements aligned with other elements to give a sense of order and to connect related concepts.
  • Handle sharp corners carefully: they contrast harshly with the forms we associate with nature, and they point the eye away from the elements inside those boxes.
  • If the interface can use a low-tech solution that plainly communicates an elaborate concept (e.g., indicator light, dial), it’s often superior to a more advanced element (e.g., touchscreen).

Avoid “Z-patterns” across the flow of the information where the eye moves left-to-right, then back to left again:

  • Set a line length limit of 50-60 characters and never go past 70, meaning narrow and tall information blocks.
  • Place labels above input fields, not to the side.
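The line-length rule is easy to enforce mechanically. A sketch using Python’s standard textwrap module to keep copy inside a 50-60 character measure:

```python
import textwrap

def readable_column(text: str, width: int = 60) -> str:
    """Wrap body copy to a narrow measure (50-60 characters, never
    past 70) so the eye doesn't travel far back between lines."""
    return textwrap.fill(text, width=width)
```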


A user must be able to quickly gather relevant information and decide:

  • A “default” option or configuration the user will usually use.


Media (like images, video, and audio players) must be easily presented with the understanding that the user may not fully perceive it at all times, with clear fallback plans for when the media doesn’t function correctly.

All media should “prime” the user to what they should next expect.

Set simple typography and color on complex backgrounds, and complex typography and color on simple ones.


Beyond marketing, colors also communicate subtle information of their own:

  • The choice between warm and cold colors determines the mood of the experience.
  • Grey shades tend to imply an inhuman experience, but are necessary to offset a colored background.

Use contrasting colors to distinguish between different elements.

  • The contrast between text color and background color should be strong.
  • The colors for the text shouldn’t be warm unless it’s trying to draw attention compared to the rest of the content.

You don’t need a wide variety of colors. Often, 2-4 colors is all most designed objects really need, with other colors simply spinning off those colors if needed to give subtle implications.
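Spinning supporting shades off a small base palette can be done in code: scale the lightness of a base color without introducing new hues. A sketch with the standard colorsys module (the example color is arbitrary):

```python
import colorsys

def shades(rgb, factors=(0.8, 1.0, 1.2)):
    """Derive darker/lighter variants of one base color by scaling its
    lightness in HLS space, keeping hue and saturation unchanged."""
    r, g, b = (c / 255 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    variants = []
    for f in factors:
        nl = max(0.0, min(1.0, l * f))   # clamp lightness to [0, 1]
        nr, ng, nb = colorsys.hls_to_rgb(h, nl, s)
        variants.append(tuple(round(c * 255) for c in (nr, ng, nb)))
    return variants

# shades((70, 130, 180)) -> a darker, the original, and a lighter steel blue
```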


Pay close attention to font choice. Text details like text placement, font, heading size, and spacing will evoke many feelings:

The typography should fit the emotional association the designer wants the user to experience.

  • Typically, only give up to 2 fonts per interface, though 1 font type is often ideal.
  • A serif font feels more authoritative; a sans-serif font feels simpler.

Make sure to use a legible font size that allows anyone using the product to read it without squinting or magnification:

  • The font should typically be 20-point or more unless it’s paragraph/content text.
  • The kerning should allow each of the letters to be close enough to identify as words, but far enough that they’re not overlapping.

White text with a black outline remains readable on any color background.

Do not mix-and-match styles, and keep everything at least somewhat the same throughout the design.


When users are dissatisfied with a product but don’t know why, they tend to ask for unimportant or unrelated features.

Adding features comes with a paradox: more features can make something more useful, but also more complicated.

  • Over their evolution (or devolution), products keep adding features to satisfy user requests, which eventually makes them less usable.
  • Fewer controls make the product look easier to use but harder to operate, since each control must do more; more controls make it feel more complicated.
  • The ideal middle ground is to match the number of controls to the number of functions, then organize them by use.

There are a few important ways to handle more features:

  1. Present features grouped according to their use, which requires understanding how frequently users want those features. Not grouping them will cause confusion.
  2. Either avoid adding unnecessary features or make them optional attachments to the core design experience, since they’ll get in the way otherwise.
  3. Keep the design as simple for the user as possible. This means keeping the number of controls the same as the number of features: too few and the design is cumbersome, too many and the design is complicated.
  4. Intuitively hide away less-frequent features. Hiding them without communicating where they are will make the user think they don’t exist, and some users may need them!
  5. Pay close attention to obsolete or unnecessary features as they arise, and remove them.

Frequently, focus groups can add features that destroy the core functionality of the product.

At Scale

Often in large organizations, a committee makes changes to the UX, and it tends to follow a lengthy procedure for it to roll out.

First, there are a variety of techniques to learn what the users’ optimal behaviors will be:

  • Focus groups will ask for opinions and feelings related to specific cues.
  • Card sorting, where people organize and categorize information in a way that makes sense to them.

There’s a lot of testing that goes into most large-scale visual designs:

  • A/B testing, exposing alternative interfaces to different groups of users and comparing the results.
  • Diary testing, collecting information via users writing a “digital diary” about their experience as they use the interface.
  • Gorilla/monkey testing, hammering the interface with repeated, random, or nonsensical inputs to see where it breaks.
  • Guerilla testing, by going to public places and getting feedback.
  • System usability scale, where people answer ten questions rated 1 to 5 about how usable they find the interface.
  • Task-based testing, by giving people instructions and observing how well they do them in the interface, as well as any hangups in their flow.
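The System Usability Scale has a fixed scoring rule: odd-numbered (positively worded) items contribute (score - 1), even-numbered (negatively worded) items contribute (5 - score), and the sum is multiplied by 2.5 to land on a 0-100 scale:

```python
def sus_score(responses):
    """Score a 10-item SUS questionnaire (each answer 1-5).
    Odd items: score - 1.  Even items: 5 - score.  Sum * 2.5 -> 0-100."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten answers, each from 1 to 5")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Perfect usability: 5s on positive items, 1s on negative items -> 100.0
```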

The entire experience of UX testing can be harrowing for designers and UX developers, since test subjects talk about what they feel while the creators focus on technical reality or an idealized aesthetic. However, every $1 invested in good UX can often return $2-100 in practice.

However, A/B testing can yield inaccurate results when implemented alone. People often enjoy novelty but don’t realize its adverse effects. Large organizations frequently make a decision after extensive A/B testing that doesn’t capture an ideal product, and they’re forced to make a difficult decision:

  1. Keep it as it is, with all its terrible elements.
  2. Rework the change, which will often confuse the users who have to adapt.
  3. Make another change to compensate, which will often confuse a different subset of users.

The best prevention is to make design decisions that, if they were wrong, won’t make a dramatic difference to results. By pure statistical reality, designers are almost guaranteed to foul up on something:

  • Give many options for the user, even if it looks ugly or unwieldy in the end.
  • Make small modifications, step-by-step, instead of large changes.
  • Always maintain convention, even if the designers hate it.
  • Keep everything as open as possible to allow people to fix and improve on it later.

Plus, design decisions are subject to the perils of miscommunication and power dynamics that come with any large group. To avoid it, projects often develop in as many phases as necessary to prevent working on a wrongly defined purpose:

  1. First, get consensus on an idea, which typically has to be realistic, but whoever approves it must believe in it.
  2. Conduct plenty of focus groups and research to find exactly what people would want or how they would use the thing.
  3. Designers/engineers will typically create wireframes and stock prototypes of the thing. These wireframes often give a visual aesthetic of what the final product will look like, but lack most details.
  4. After someone approves it, the creators will create a near-finished work with few of its features. This is the first time the thing actually “exists”, but it’s pretty lame by comparison to the final product.
  5. After another approval, the product is slated for final production, which will often involve testing to make sure that it can be mass-produced or shown to the public.
  6. Finally, after another round of approvals, it’s shipped out the door, with numerous tweaks and updates as things fail.
  7. Maintain a continuous design cycle that constantly harvests feedback and updates the product to get better.
  8. If the product itself has become obsolete or needs to be abandoned, use the information from it for a new design, and start at step 1 again.

To combat the extra complexity, there are a few major ways to improve interface changes:

  1. Understand exactly the kinds of people using the interface. This often requires including someone in the discussion who would normally not talk with the designers/developers, and preferably isn’t part of the organization at all.
  2. Create a “visual language” that demarcates a consistent pattern across the entire organization, often with a style guide.
  3. Put together a shared set of components the entire organization is supposed to use.
  4. Constantly communicate as things change, using one system for all discussions.
  5. Keep designers and engineers constantly discussing with each other about any changes.
  6. Constantly document as things change, and keep it integrated with the style guide.
  7. No matter what, always maintain constraints on what designers can do (e.g., time limits, space limits).
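A shared component set usually starts as a single source of truth for design tokens that every team reads instead of hard-coding values. A minimal sketch of the idea in Python (the token names and values here are illustrative assumptions, not from any real style guide):

```python
# Hypothetical "single source of truth" for a visual language: components
# look values up here instead of inventing their own.
TOKENS = {
    "color.primary": "#0055AA",
    "color.text": "#1A1A1A",
    "spacing.sm": 8,    # pixels
    "spacing.md": 16,
}

def token(name: str):
    """Look up a design token; failing loudly beats silently drifting
    away from the style guide."""
    try:
        return TOKENS[name]
    except KeyError:
        raise KeyError(f"Unknown design token: {name!r} -- add it to the style guide first")

# Usage: a component asks for the token rather than hard-coding a hex value.
button_color = token("color.primary")
```

The lookup failing on unknown names is the point: it forces the conversation ("add it to the style guide first") that keeps the visual language consistent.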

If the product is particularly complex, divide the design into “layers” that change at different rates:

  • Site – its location or designated place (hours to never)
  • Skin – its exterior structure (minutes to every 20 years)
  • Structure – foundation and load-bearing capacity (weekly to rarely)
  • Services – people who must maintain it (hourly to every 7-15 years)
  • Space Plan – changes inside the element (every few months to every 10-30 years)
  • Zeitgeist – shared awareness and understanding (every few months to every 2-3 years)
  • Stuff – the inner workings and miscellaneous domains

It’s worth indicating that “slower” design elements tend to constrain the “faster” ones.

Generally, the size of large organizations and the work required tend to inhibit completely free customization. For simple commodities this isn’t a problem, but higher-end products need more freedom for users to explore and identify with the object.

Another risk of large-scale endeavors is complete blandness. Since people can get easily offended by small distinctions, the form that offends nobody is incredibly boring, and it pleases very few people as well.

Big Issues

Most of the time, UX developers follow safe fashions, but occasionally they’ll run a fashion out to its extreme and make the interface almost unusable, for a few broad reasons:

  • Designers become intimately familiar with what they’re designing but have very limited interaction with the users. Often, they may only know what marketing professionals or their friends think.
  • Designers have seen the same old, tired thing too many times. They tend not to understand that the thing exists in its current form because that was often the best way to do it, or they disregard conventions everyone is used to.
  • When a new design trend or technology becomes popular, designers tend to abuse it.
  • Sometimes, managers will override the designers’ professional experience, testing, and common sense based on their personal preferences or interests.
  • Often, once a product has become complex enough to fulfill a specific use, its complexity forms a cult-like culture around it that elevates the object as more valuable than what it actually does. At that point, it satisfies market demand through appealing to an image of sophistication, while being awful for the user (and also opening up the opportunity for a better version elsewhere).

There are plenty of examples of bad fashions:

  • Using touchscreens instead of physical buttons in automobiles.
  • Having internet-connected devices that don’t need to be connected to the internet.
  • Aggressively auto-connecting Bluetooth when the user doesn’t want it.
  • Offering “popular choices” that are clearly not popular.
  • Washing out the visual contrast for style reasons, making text hard to read.
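The contrast complaint is measurable: WCAG defines a contrast ratio from relative luminance, and fashionably light gray text on white typically falls below the 4.5:1 minimum for body text. A rough sketch using the WCAG 2.x formula (the example colors are my own):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance from 8-bit sRGB channel values."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L1 + 0.05) / (L2 + 0.05), with the lighter luminance on top."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white hits the maximum ratio of 21:1 ...
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))          # 21.0
# ... while light gray (#AAAAAA) on white falls well below the 4.5:1
# minimum WCAG requires for normal body text.
print(round(contrast_ratio((170, 170, 170), (255, 255, 255)), 2))
```

Running a proposed palette through a check like this is a cheap way to catch the “washed-out for style” fashion before it ships.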

The best solution, in all respects, is to get users using the thing as soon as possible, since making large changes later can become very difficult or expensive. After all, neither the marketing professional nor even the designer understands how the user interacts with it as well as the user does.

Dark Patterns

One of the more sinister uses of UX is to create “dark patterns” that steer users to make decisions they otherwise wouldn’t have taken:

  • Motivating a service upgrade that someone may have not wanted to pay for.
  • Provoking the user to give information they likely wouldn’t have given.
  • Creating friction against actions they don’t want the user to take (e.g., unsubscribing, deleting a user profile).
  • Requiring the user to repeatedly decline a permission or service, ignoring the user’s response, or intentionally causing the app/site to crash once they select it.
  • Requiring a login to an otherwise-free service (like cat videos), even when the user doesn’t want to give a login.
  • Requiring users to speak to a human being to cancel their subscription.
  • Showing a product as out of stock while offering an otherwise-identical product at the same price but with noticeably less weight.

Engineers often compromise when they’re instructed to build these patterns, and evil prevails when good people do nothing.


Very frequently, UX developers are motivated to create elements that draw the user in indefinitely. Even when something is free, more time with the software means the person is more likely to spend money on an upgrade, give data that can be sold, or increase the value of selling advertisement space.

The obsession with “user engagement” (i.e., addiction) turns “user experience” in many cases into “user exploitation”, and it presents itself through many subtle dark patterns:

  • Creating a feeling of urgency with relative timestamps (e.g., “3 hours ago” instead of “6:15 PM”).
  • Making an endless loop of behavior with infinite scrolling instead of a “More” button or link.
  • A fake points system with icons that reinforces the experience through interaction and feedback.
  • Tweaking the system to promote or bury certain user-made content, either through hidden algorithms or at the developer’s/company’s whim.
  • Making leaving the service frustratingly difficult, or requiring a human being to finalize the process.
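The relative-timestamp trick above is trivial to implement, which is partly why it is everywhere. A sketch contrasting the two presentations (the helper names are mine, not any library’s API):

```python
from datetime import datetime, timedelta

def absolute_label(ts: datetime) -> str:
    """Neutral presentation: the actual clock time, e.g. '3:15 PM'."""
    return ts.strftime("%I:%M %p").lstrip("0")

def relative_label(ts: datetime, now: datetime) -> str:
    """Urgency-inducing presentation: elapsed time, always counting up."""
    hours = int((now - ts).total_seconds() // 3600)
    if hours < 1:
        return "just now"
    return f"{hours} hour{'s' if hours != 1 else ''} ago"

now = datetime(2024, 1, 1, 18, 15)
posted = now - timedelta(hours=3)
print(absolute_label(posted))        # 3:15 PM
print(relative_label(posted, now))   # 3 hours ago
```

The absolute label lets the content age gracefully; the relative label keeps ticking upward, nudging the user to check back before things feel stale.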

More Reading

  • A tech-oriented dive into this content
  • UX in General
  • Fitts’ Law
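For reference, Fitts’ Law (mentioned above) predicts the time to acquire a target from its distance and width: MT = a + b·log₂(D/W + 1), in the common Shannon formulation. A quick sketch with illustrative constants (a and b are device-dependent regression constants that must be fit empirically; the values below are placeholders):

```python
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.1, b: float = 0.2) -> float:
    """Predicted time (seconds) to hit a target of the given width at the
    given distance, per the Shannon formulation of Fitts' Law.
    a and b are placeholder constants; real values are fit per device."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Bigger, closer targets are faster to acquire -- one reason large tap
# areas and edge-of-screen buttons feel effortless.
far_small = fitts_movement_time(distance=800, width=20)
near_big = fitts_movement_time(distance=100, width=100)
print(far_small > near_big)   # True
```

This is the quantitative backbone behind many of the layout judgments above: shrinking a button or pushing it farther from where the pointer usually sits has a measurable cost.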