Psychology for UX: Study Guide

Bringing psychology and technology together is at the heart of UX design because UX is people. However, you do not need a degree in psychology to understand the basics of how humans function. Most psychological principles relevant to UX are easy to understand, yet they make a big difference when applied correctly. Since the beginning, NN/g has preached that the best designs are built for people as they really are — not as we wish they were.

Don Norman (one of our principals) calls himself a cognitive designer because regardless of the type of products you are working on, what matters is that you design systems for how people think. The following resources will help you explore and understand many of the psychological principles that help create the best user experiences and achieve an organization’s goals.

Resources in this study guide are grouped under the following topics:

Attention

Although most people feel like they notice everything going on around them, their ability to do so is very limited. Humans cannot focus their attention on everything at once — their brains automatically filter out anything that doesn’t seem useful.

Items that are close together are perceived as being related.

Memory

Human memory is limited and imperfect. The limits of human memory affect people’s ability to process information and shape the way information is stored for long periods.

Sensemaking

People are not like cameras: they do not objectively capture information and process it the same way anyone else would. People constantly try to make sense of the world by relying on their own experiences and understandings. Sometimes these interpretations are accurate, and sometimes they are not.

Decision Making and Choice

Having more options does not always lead to greater satisfaction. Making choices (especially complex ones) is difficult and requires significant mental effort. Guiding users through decisions by keeping things simple improves their experience in any context.

Motor Processes and Interaction

Interactions between humans and technology are inherently limited by human abilities and their willingness to act. To create the best user experiences, systems need to adapt to people, not people to systems.

Motivation

UX designers must create usable designs, but they must also create designs that people are motivated to use. However, leveraging what we know about human motivation in ways that harm people is both unethical and bad for business.

Cognitive Biases

Patterns that describe systematic ways in which people deviate from rational thinking are often called biases or heuristics. These biases are mental shortcuts people use to save themselves from doing extra mental work when making sense of the world.

Persuasion and Influence

Although they may not realize it, many people are not firmly decided on a course of action until they take it. Psychology describes how people give weight to certain types of information as they choose courses of action and the factors that can nudge their decisions.

Trust is foundational to all relationships — including relationships between users and websites. It is important for designs to establish credibility and win users’ trust to develop a long-term relationship.

Emotion and Delight

Don Norman said, “Without emotions, your decision-making ability would be impaired.” Emotions play a critical role in daily functioning and determine which experiences will delight people.

Attitudes toward Technology

The way people use technology affects their lives. Designers must take care to impact people in positive ways through the designs they create.

Additional Paid Resources

Full-day courses:

Books:

from NN/g latest articles and announcements https://www.nngroup.com/articles/psychology-study-guide/

Decoding the Art of Color Palettes for Scalable Design Systems

Extended color palettes

Since user interfaces have numerous components with multiple layers and states, defining just one color for each category mentioned above isn’t enough; it can also lead to accessibility issues and creative constraints. We solve this by using the colors defined above as the bases for our extended palettes. Aim for around 10 different shades of each base color.

When it comes to building out an extended color palette, I’ve seen multiple ways to go about it. Below, I describe the two most popular ones:

Color palette generators

Here, you use a color palette generator that does some math in the background and generates an extended color palette based on the base color defined by you. This method is pretty straightforward and highly recommended if you’re a beginner or crunched for time.

While there are multiple tools for this on the internet, I prefer using a neat little Figma plugin named Foundations: Color Generator. All you need to do is choose a color profile (I prefer Material) and define your base color, and it will generate an extended color palette. The real reason I prefer this plugin is the additional options it provides: design tokens, color palette snippets with color contrast ratios, and the ability to add the entire palette to your Figma styles, all with one click, or maybe a few.

The plugin also neatly lists the color contrast ratios. Since we’d most likely use the 500 shade as our base, ensure that its contrast ratio against white is 4.5:1 or greater.
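If you’d rather verify that ratio in code than read it off a plugin, the WCAG contrast formula is simple enough to compute yourself. Here’s a minimal JavaScript sketch (the hex values below are just examples, not prescribed base colors):

// Relative luminance per WCAG 2.x, from an sRGB hex color like "#1565C0".
function luminance(hex) {
  return [1, 3, 5]
    .map((i) => parseInt(hex.slice(i, i + 2), 16) / 255)
    .map((c) => (c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4))
    .reduce((sum, c, i) => sum + c * [0.2126, 0.7152, 0.0722][i], 0);
}

// Contrast ratio between two colors, from 1:1 up to 21:1.
function contrastRatio(a, b) {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Example: is this blue usable as a base shade against white?
console.log(contrastRatio("#1565C0", "#FFFFFF") >= 4.5); // true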

ColorBox by Kevyn Arnott

If the method above seems too simple for your taste or you just want complete control over your palette, ColorBox is just the tool for you. The ColorBox method is relatively advanced and time-consuming. It can be difficult to achieve a harmonious transition if you don’t know what you’re doing, so choose this one carefully.

Unlike the previous method, you trade the convenience of one-click generators for granular control: you can define everything from the hue, saturation, and brightness to the easing functions applied to them.
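To make that concrete, here’s a hedged sketch of the same idea in JavaScript: easing lightness and saturation across ten steps to produce a shade ramp. The specific curves and ranges are illustrative assumptions, not ColorBox’s actual defaults:

// Quadratic ease-in-out: slow near the ends, fast in the middle.
const easeInOut = (t) => (t < 0.5 ? 2 * t * t : 1 - (-2 * t + 2) ** 2 / 2);

// Generate `steps` shades of a hue, light to dark (e.g. tokens 50-900).
function ramp(hue, steps = 10) {
  return Array.from({ length: steps }, (_, i) => {
    const t = i / (steps - 1);
    const lightness = 95 - easeInOut(t) * 80;  // 95% down to 15%
    const saturation = 40 + easeInOut(t) * 40; // deepen toward darker shades
    return `hsl(${hue}, ${saturation.toFixed(0)}%, ${lightness.toFixed(0)}%)`;
  });
}

console.log(ramp(210)); // ten blue shades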

You can learn more about the tool here.

from Design Systems on Medium https://blog.kamathrohan.com/decoding-the-art-of-color-palettes-for-scalable-design-systems-e77a3cc8d3de

3D Gaussian Splatting

3D Gaussian Splatting is a recent volume-rendering method for capturing real-life scenes as 3D data and rendering them in real time. The end results are similar to those from Radiance Field methods (NeRFs), but it’s quicker to set up, renders faster, and delivers the same or better quality.

Plus, it’s simpler to grasp and modify. The results of the method are called splats.
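For the curious, here is a rough sketch of the underlying math, paraphrased from the original 3D Gaussian Splatting paper (not from Spline’s implementation). Each splat is an anisotropic 3D Gaussian with position $\mu$ and covariance $\Sigma$, carrying an opacity $\alpha$ and a view-dependent color $c$:

$$G(x) = e^{-\frac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu)}$$

To render a frame, the Gaussians are projected to 2D and alpha-composited front to back, so a pixel’s color is

$$C = \sum_{i} c_i\,\alpha_i \prod_{j<i} \left(1-\alpha_j\right)$$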

Create a Gaussian splat by using a mobile app like Polycam or Luma:

That’s it! Now you can start adjusting how the splats look in your Spline scene.

Apply Cropping
When using crop areas, Spline only exports the visible splats within these areas to increase performance in your final exported scene.

If you also want to permanently apply your crop areas in-editor, you can press "Apply Cropping". This will permanently delete the invisible splats outside the cropping area, which can increase performance within the editor itself.

You can use crop areas to remove parts from your splats. This can be useful for optimization/performance but also for aesthetic reasons.

Note: Support for the splats within the Performance panel will be added soon.

Note: Mobile support is partial. This is still an experimental feature (the underlying technology is recent), and mobile support is an ongoing effort.

Gaussian splats are rendered differently from normal objects and geometries, which means that not all features are directly compatible when mixing them together.
Currently, the following features are either incompatible or only partially supported with splats:

Don’t be discouraged by the limited support!
Until recently, it was almost impossible to render real-time, hyperrealistic 3D representations like this on the web. The technology is evolving fast, and improvements will come over time!

Keep an eye on Spline’s updates!

from 3D Gaussian Splatting https://docs.spline.design/e17b7c105ef0433f8c5d2b39d512614e

Web Components Will Outlive Your JavaScript Framework

If you’re anything like me, when you’re starting a project, there’s a paralyzing period of indecision while you try to figure out how to build it. In the JavaScript world, that usually boils down to picking a framework. Do you go with Ol’ Reliable, a.k.a. React? Something slimmer and trendier, like Svelte or Solid? How about kicking it old school with a server-side framework and HTMX?

When I was writing my CRDT blog post series (An Interactive Intro to CRDTs: jakelazaroff.com/words/an-interactive-intro-to-crdts/), I knew I wanted to include interactive demos to illustrate the concepts. Here’s an example: a toy collaborative pixel art editor.

JavaScript is required to run this demo.

Even though I’ve written before — and still believe — that React is a good default option (No One Ever Got Fired for Choosing React: jakelazaroff.com/words/no-one-ever-got-fired-for-choosing-react/), the constraints of a project should determine the technology decisions. In this case, I chose to use vanilla JS web components. I want to talk about why.

There was one guiding principle for this project: although they happened to be built with HTML, CSS and JS, these examples were content, not code. In other words, they’d be handled more or less the same as any image or video I would include in my blog posts. They should be portable to any place in which I can render HTML.

As of 2023, this blog is built with Astro. Before that, it was built with my own static site generator (jake.museum/jakelazaroff-v5/). Before that, Hugo (jake.museum/jakenyc-v3/); before that, a custom CMS written in PHP (jake.museum/jakelazaroff-blog/); before that, Tumblr (jake.museum/hexnut-v5/), Movable Type (jake.museum/hexnut-v4/) and WordPress (jake.museum/mlingojones/) — and I’m sure I’m missing some in between. I really like Astro, but it’s reasonable to assume that this website won’t run on it forever.

One thing that has made these migrations easier in recent years is keeping all my content in plain text files written in Markdown. Rather than dealing with the invariably convoluted process of moving my content between systems — exporting it from one, importing it into another, fixing any incompatibilities, maybe removing some things that I can’t find a way to port over — I drop my Markdown files into the new website and it mostly Just Works.

Most website generators have a way to include more complex markup within your content, and Astro is no different. The MDX integration allows you to render Astro components within your Markdown files. Those components have access to all the niceties of the Astro build system: you can write HTML, CSS and JS within one file, and Astro will automagically extract and optimize everything for you. It will scope CSS selectors and compile TypeScript and let you conditionally render markup and do all sorts of other fancy stuff.

The drawback, of course, is that it all only works inside Astro. In order to switch to a different site generator, I’d have to rewrite those components. I might need to split up the HTML, CSS and JS, or configure a new build system, or find a new way to scope styles. So Astro-specific features were off limits — no matter how convenient.

But Markdown has a secret weapon: you can write HTML inside of it (daringfireball.net/projects/markdown/syntax#html)! That means any fancy interactive diagrams I wanted to add would be just as portable as the rest of my Markdown, as long as I could express them as plain HTML tags.

Web components (developer.mozilla.org/en-US/docs/Web/API/Web_Components) hit that nail square on the head. They’re a set of W3C standards for building reusable HTML elements. You use them by writing a class for a custom element, registering a tag name and using it in your markup. Here’s how I embedded that pixel art editor before:

<pixelart-demo></pixelart-demo>

That’s the honest-to-goodness HTML I have in the Markdown for this post. That’s it! There’s no special setup; I don’t have to remember to put specific elements on the page before calling a function or load a bunch of extra resources.1 Of course, I do need to keep the JS files around and link to them with a <script> tag. But that goes for any media: there needs to be some way to reference it from within textual content. With web components, once the script is loaded, the tag name gets registered and works anywhere on the page — even if the markup is present before the JavaScript runs.
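For illustration, the full setup amounts to something like this (the script path here is hypothetical):

<script type="module" src="/js/pixelart-demo.js"></script>
<pixelart-demo></pixelart-demo>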

Web components encapsulate all their HTML, CSS and JS within a single file, with no build system necessary. Having all the code for a component in one place significantly reduces my mental overhead, and I continue to be a huge fan of single-file components for their developer experience. While web components aren’t quite as nice to write as their Astro or Svelte counterparts, they’re still super convenient.2

In case you’re not familiar with web components, here’s the code for that <pixelart-demo> component above:3

import PixelEditor from "./PixelEditor.js";

class PixelArtDemo extends HTMLElement {
  constructor() {
    super();

    // Render into a closed shadow root so the demo's markup and styles
    // stay isolated from the surrounding page.
    this.shadow = this.attachShadow({ mode: "closed" });
    this.render();

    // Attributes act like native props: `resolution` sets the canvas size…
    const resolution = Number(this.getAttribute("resolution")) || 100;
    const size = { w: resolution, h: resolution };

    const alice = new PixelEditor(this.shadow.querySelector("#alice"), size);
    const bob = new PixelEditor(this.shadow.querySelector("#bob"), size);

    // …and the boolean `debug` attribute toggles per-pixel debug info.
    alice.debug = bob.debug = this.hasAttribute("debug");
  }

  render() {
    this.shadow.innerHTML = `
      <div class="wrapper">
        <canvas class="canvas" id="alice"></canvas>
        <canvas class="canvas" id="bob"></canvas>
        <input class="color" type="color" value="#000000" />
      </div>
      <style>
        .wrapper {
          display: grid;
          grid-template-columns: 1fr 1fr;
          grid-template-rows: 1fr auto;
          gap: 1rem;
          margin: 2rem 0 3rem;
        }
        .canvas {
          grid-row: 1;
          width: 100%;
          aspect-ratio: 1 / 1;
          border: 0.25rem solid #eeeeee;
          border-radius: 0.25rem;
          cursor: crosshair;
        }
        .color {
          grid-column: 1 / span 2;
        }
      </style>
    `;
  }
}

customElements.define("pixelart-demo", PixelArtDemo);

Everything is nicely contained within this one file. There is that one import at the top, but it’s an ES module import (developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules) — it doesn’t rely on any sort of build system. As long as I keep all the files together, the browser will sort everything out.

Another nice thing about web components is shadow DOM, which isolates the component from the surrounding page. I think shadow DOM is often awkward when you want to share styles between your components and the rest of your app, but it’s perfect when you truly do want everything to be isolated. Just like images and videos, these components will look and act the same no matter where they’re used.

Sorry — they’re not just like images and videos. Web components can expose attributes that allow you to configure them from the outside. You can think of them as native props. Voilà:

JavaScript is required to run this demo.

Two input ranges with different accent colors. In this case, I’m just setting a CSS variable, which is one of the few things allowed into the shadow DOM:

<range-slider style="--accent: #0085F2"></range-slider>
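The post doesn’t show this component’s internals, so here’s a hedged sketch of how a <range-slider> like that could work (every detail below is an assumption). The key point: CSS custom properties are inherited across the shadow boundary, so --accent set on the host reaches the inner input.

class RangeSlider extends HTMLElement {
  constructor() {
    super();
    // Closed shadow root: outside styles can't reach in, but inherited
    // CSS custom properties like --accent still cross the boundary.
    const shadow = this.attachShadow({ mode: "closed" });
    shadow.innerHTML = `
      <input type="range" />
      <style>
        input { accent-color: var(--accent, #000000); }
      </style>
    `;
  }
}

customElements.define("range-slider", RangeSlider);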

Here’s a more complex example:

JavaScript is required to run this demo.

And here’s the markup. It uses attributes to alter the component’s behavior, setting the resolution to 20 and showing debug information on every pixel:

<pixelart-demo debug resolution="20"></pixelart-demo>

If you were wondering what those calls to getAttribute and hasAttribute were doing in the web component class, now you know. This was particularly useful when reusing the same component for different stages of a tutorial, allowing me to enable certain features as the tutorial progressed.
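The component reads its attributes once, in the constructor. If you needed it to react to attribute changes after it’s already on the page, the custom element lifecycle hooks cover that too. A minimal sketch (resize() is a hypothetical helper, not part of the post’s code):

// Sketch: these hooks would live in the PixelArtDemo class defined earlier.
class PixelArtDemo extends HTMLElement {
  // Only attributes listed here trigger the callback below.
  static observedAttributes = ["debug", "resolution"];

  attributeChangedCallback(name, oldValue, newValue) {
    if (name === "debug") this.debug = newValue !== null;
    // resize() is a hypothetical helper that would rebuild the canvases.
    if (name === "resolution") this.resize(Number(newValue) || 100);
  }
}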

The other part of the equation was using vanilla JS. There are frameworks that compile to web components — most notably Lit (although I’d call it more of a library) but also Stencil, Svelte, and probably others. I’m sure they’re all wonderful tools that would have made my life easier in a lot of ways. But frameworks are dependencies, and dependencies have a bunch of tradeoffs. In this case, the tradeoff I’m most worried about is maintenance.4

That goes for TypeScript, too. By my count, the last 15 versions of TypeScript have had breaking changes — many of them new features that I was happy to have, even though I had to change my code to accommodate them. But as much as I love TypeScript, it’s not a native substrate of the web. It’s still a dependency.

There’s a cost to using dependencies. New versions are released, APIs change, and it takes time and effort to make sure your own code remains compatible with them. And the cost accumulates over time. It would be one thing if I planned to continually work on this code; it’s usually simple enough to migrate from one version of a dependency to the next. But I’m not planning to ever really touch this code again unless I absolutely need to. And if I do ever need to touch it, I really don’t want to go through multiple years’ worth of updates all at once.

I learned that lesson the hard way (Preserving the Web: jakelazaroff.com/words/preserving-the-web/) when I built my online museum, wiping the cobwebs off of code saved on laptops that hadn’t been turned on in a full decade. The more dependencies a website had, the more difficult it was to restore.

I’ve been building on the web for almost 20 years. That’s long enough to witness the birth, rise and fall of jQuery. Node.js was created, forked into io.js and merged back into Node. Backbone burst onto the scene and was quickly replaced with AngularJS, which was replaced with React, which has been around for only half that time and has still gone through like five different ways to write components.

But as the ecosystem around it swirled, the web platform itself remained remarkably stable — largely because the stewards of the standards painstakingly ensured that no new change would break existing websites.5 The original Space Jam website from 1996 (www.spacejam.com/1996/) is famously still up, and renders perfectly in modern browsers. So does the first version of the website you’re reading now (jake.museum/jakelazaroff-v1/), made when I was a freshman in college 15 years ago. Hell, the first website ever created (info.cern.ch/hypertext/WWW/TheProject.html) — built closer to the formation of the Beatles 6 than to today! — still works, in all its barebones hypertext glory.

If we want that sort of longevity, we need to avoid dependencies that we don’t control and stick to standards that we know won’t break. If we want our work to be accessible in five or ten or even 20 years, we need to use the web with no layers in between. For all its warts, the web has become the most resilient, portable, future-proof computing platform we’ve ever created — at least, if we build with that in mind.

from jakelazaroff.com https://jakelazaroff.com/words/web-components-will-outlive-your-javascript-framework/

When Will AR Shopping Be a Mass-Market Reality?



We hear a lot about AR shopping, such as 3D virtual try-ons. The common chorus of value propositions centers on buyer confidence: knowing that the shoe fits or that the eyeliner shade is right. That can lead to higher eCommerce conversions and lower return rates.

To be more specific, AR shopping can include 3D product models in zoom-and-rotate carousels on eCommerce sites (e.g., Google Swirl). It can also involve AR lenses (e.g., Snapchat), which take things a step further by activating the camera for real-world scene placement.

On the user end, interest is growing, due partly to Gen Z’s rising spending power. As they cycle into the adult consumer population, they bring camera-native tendencies with them. Along with broader cultural adoption, this generational effect could accelerate AR shopping adoption.

But those aren’t the only gating factors. There are also adoption barriers on the merchant end. These have been lowered to some degree by players like Snap that make AR experience creation easier. But that still often leaves a workflow gap: the 3D product models themselves.


Democratization Efforts

These 3D models, representations of the products themselves, are the centerpiece of virtual try-ons. They need accurate textures, colors, etc. And though any manufactured product has a CAD model floating around somewhere, those models have to be compressed and optimized for mobile shopping.

This 3D-model bottleneck is starting to be alleviated by players that streamline and democratize the process. They include VNTANA in 3D model management, optimization, and deployment. On the capture/creation end, there are players like CG Trader that produce 3D product models.

Meanwhile, other democratization efforts continue to progress. For example, Apple’s Object Capture lets developers build 3D model-creation capabilities into their apps to further reduce friction in producing these assets. Among other things, this can empower smaller merchants.

In fact, Shopify continues to lean into this principle. It recently integrated the latest flavors of Object Capture in iOS 17. This lets merchants scan products to create 3D models using an iPhone Pro’s LiDAR scanner as opposed to the advanced photogrammetry equipment usually needed.

One benefit to such merchants, beyond raising their game with AR, is cost. The current standard for product displays in eCommerce is HD photography, which isn’t cheap. In fact, CG Trader has quantified how 3D model creation is cheaper and more versatile than photo shoots.


Common Sequence

Stepping back, though all the above presents opportunities for brands and retailers, there’s still adoption friction. Here, the lesson is the same as with past tech revolutions, such as mobile marketing: develop early competency or be ill-prepared when the tipping point comes.

This will play out through a common sequence. First, early-adopter brands will offer AR shopping. Then consumers will get a taste for it and start to get acclimated. That acclimation then evolves into expectation. And that’s the moment when AR shopping reaches that tipping point.

Brands that haven’t adopted the technology at that point are suddenly behind. And like the early days of the smartphone era, this puts laggards at a competitive disadvantage. That’s followed by years of playing catch-up… which is costlier than adopting the technology in the first place.

As they say, those who don’t study history are destined to repeat it. Though AR will have its own evolutionary path, its adoption and competitive dynamics will have at least some parallels to past tech cycles and emerging media formats. We’ll see who has the best institutional memory.


from AR Insider https://arinsider.co/2023/11/01/when-will-ar-shopping-be-a-mass-market-reality/

ML System Based on Light Could Yield More Powerful, Efficient LLMs



By MIT News

August 25, 2023

Artist's rendition of a computer system based on light that could jumpstart the power of machine-learning programs like ChatGPT.

With the new system, the team reports a greater than 100-fold improvement in energy efficiency and a 25-fold improvement in compute density over state-of-the-art digital computers for machine learning.

Credit: Ella Maru Studio

A team led by researchers at the Massachusetts Institute of Technology has developed a light-based machine learning system that could surpass the system behind ChatGPT in terms of power and efficiency, while also consuming less energy.

The compact architecture is based on arrays of vertical-cavity surface-emitting lasers (VCSELs) developed by researchers at Germany’s Technische Universität Berlin.

The system uses hundreds of micron-scale lasers and the movement of light to perform computations.

The researchers said it could be scaled for commercial use in the near future, given its reliance on laser arrays commonly used in cellphone facial-identification systems and in data communication.

They found the system to be 100 times more energy efficient and 25 times more powerful in terms of compute density than current state-of-the-art supercomputers used to power existing machine learning models.

From MIT News

Abstracts Copyright © 2023 SmithBucklin, Washington, D.C., USA



from Communications of the ACM – Artificial Intelligence http://cacm.acm.org/news/275783-ml-system-based-on-light-could-yield-more-powerful-efficient-llms

Stakeholder management for design systems


Why should you care about stakeholders?

Stakeholders, by definition, have an interest in your project and some kind of influence to impact its success.

Some stakeholders will have a higher impact on your project than others. However, the impact is not always tied to the position or power an individual has within an organization.

For example, a developer can convince her PM that using the design system will slow down the project. If this happens multiple times, it drastically reduces your adoption rate. This has a big impact, even though the same developer may be low in your company’s org chart.

Stakeholders can influence your project in different ways. For example, by providing or withdrawing resources like budget or people. Or by cooperating with your team or blocking all cooperation. Advocating for your design system is another way in which a stakeholder can support your success.

A design system changes the way people do their work, and most people are afraid of change. This poses a problem, as a design system must be accepted and used to be successful.

As a consequence, a huge part of creating a successful design system is working with the…

from Design Systems on Medium https://uxdesign.cc/stakeholder-management-for-design-systems-3841edfdb136