All JavaScript and TypeScript features of the last 3 years

This article goes through almost all of the changes of the last 3 years (and some from earlier) in JavaScript / ECMAScript and TypeScript.

from Sidebar https://medium.com/@LinusSchlumberger/all-javascript-and-typescript-features-of-the-last-3-years-629c57e73e42

The Market for Lemons


For most of the past decade, I have spent a considerable fraction of my professional life consulting with teams building on the web.

It is not going well.

Not only are new services being built to a self-defeatingly low UX and performance standard, existing experiences are pervasively re-developed on unspeakably slow, JS-taxed stacks. At a business level, this is a disaster, raising the question: “why are new teams buying into stacks that have failed so often before?”

In other words, “why is this market so inefficient?”

George Akerlof’s most famous paper introduced economists to the idea that information asymmetries distort markets and reduce the quality of goods because sellers with more information can pass off low-quality items as more valuable than informed buyers appraise them to be. (PDF, summary)

Customers that can’t assess the quality of products pay the wrong amount for them, creating a disincentive for high-quality products to emerge and working against their success when they do. For many years, this effect has dominated the frontend technology market. Partisans for slow, complex frameworks have successfully marketed lemons as the hot new thing, despite the pervasive failures in their wake, crowding out higher-quality options in the process.

These technologies were initially pitched on the back of “better user experiences”, but have utterly failed to deliver on that promise outside of the high-management-maturity organisations in which they were born. Transplanted into the wider web, these new stacks have proven to be expensive duds.

The complexity merchants knew their environments weren’t typical, but they sold highly specialised tools as though they were generally appropriate. They understood that most websites lack tight latency budgeting, dedicated performance teams, hawkish management reviews, ship gates to prevent regressions, and end-to-end measurements of critical user journeys. They understood that the only way to scale JS-driven frontends is massive investment in controlling complexity, but they warned none of their customers.

They also knew that their choices were hard to replicate. Few can afford to build and maintain 3+ versions of the same web app (“desktop”, “mobile”, and “lite”), and vanishingly few scaled sites feature long sessions and login-gated content.

Armed with all of this background and knowledge, they kept the caveats to themselves.

What Did They Know And When Did They Know It?

This information asymmetry persists; the worst actors still haven’t levelled with their communities about what it takes to operate complex JS stacks at scale. They did not signpost the delicate balance of engineering constraints that allowed their products to adopt this new, slow, and complicated tech. Why? For the same reason used car dealers don’t talk up average monthly repair costs.

The market for lemons depends on customers having less information than those selling shoddy products. Some who hyped these stacks early on were earnestly ignorant, which is forgivable when recognition of error leads to changes in behaviour. But that’s not what the most popular frameworks of the last decade did.

As time passed, and the results continued to underwhelm, an initial lack of clarity was revealed to be intentional omission. These omissions have been material to both users and developers. Extensive evidence of these failures was provided directly to their marketeers, often by me. At some point (certainly by 2017) the omissions veered into intentional prevarication.

Faced with the dawning realisation that this tech mostly made things worse, not better, the JS-industrial-complex pulled an Exxon.

They could have copped to an honest error, admitted that these technologies require vast infrastructure to operate and that they are unscalable in the hands of all but the most sophisticated teams. They did the opposite, doubling down, breathlessly announcing vapourware year after year to forestall critical thinking about fundamental design flaws. They also worked behind the scenes to marginalise those who pointed out the disturbing results and extraordinary costs.

Credit where it’s due, the complexity merchants have been incredibly effective in one regard: top-shelf marketing discipline.

Over the last ten years, they have worked overtime to make frontend an evidence-free zone. The hucksters knew that discussions about performance tradeoffs would not end with teams investing more in their technology, so boosterism and misdirection were aggressively substituted for evidence and debate. Like a curtain of Halon descending to put out the fire of engineering debate, they blanketed the discourse with toxic positivity. Those who dared speak up were branded “negative” and “haters”, no matter how much data they lugged in tow.

Sandy Foundations

It was, of course, bullshit.

Astonishingly, gobsmackingly effective bullshit, but nonsense nonetheless. There was a point to it, though. Playing for time allowed the bullshitters to punt introspection of the always-wrong assumptions they’d built their entire technical edifice on. In time, these misapprehensions would become cursed articles of faith:

  • CPUs get faster every year

    [ narrator: they do not  ]

  • Organisations can manage these complex stacks

    [ narrator: they cannot  ]

All of this was falsified by 2016, but nobody wanted to turn on the house lights while the JS party was in full swing. Not the developers being showered with shiny tools and boffo praise for replacing “legacy” HTML and CSS that performed fine. Not the scoundrels peddling foul JavaScript elixirs and potions. Not the managers that craved a check to write and a rewrite to take credit for in lieu of critical thinking about user needs and market research.

Consider the narrative Crazy Ivans that led to this point.

By 2013 the trashfuture was here, just not evenly distributed yet. Undeterred, the complexity merchants spent a decade selling inequality-exacerbating technology as a cure-all tonic.

It’s challenging to summarise a vast discourse over the span of a decade, particularly one as dense with jargon and acronyms as the one that led to today’s status quo of overpriced failure. These are not quotes, but vignettes of distinct epochs in our tortured journey:

  • “SPAs are a better user experience, and managing state is a big problem on the client side. You’ll need a tool to help structure that complexity when rendering on the client side, and our framework works at scale”

    [ illustrative example ]

  • “Instead of waiting on the JavaScript that will absolutely deliver a superior SPA experience…someday…why not render on the server as well, so that there’s something for the user to look at while they wait for our awesome and totally scalable JavaScript to collect its thoughts?”

    [ an intro to “isomorphic javascript”, a.k.a. “Server-Side Rendering”, a.k.a. “SSR” ]

  • “SPAs are a better experience, but everyone knows you’ll need to do all the work twice because SSR makes that better experience minimally usable. But even with SSR, you might be sending so much JS that things feel bad. So give us credit for a promise of vapourware for delay-loading parts of your JS.”

    [ impressive stage management ]

  • “SPAs are a better experience. SSR is vital because SPAs take a long time to start up, and you aren’t using our vapourware to split your code effectively. As a result, the main thread is often locked up, which could be bad?

    Anyway, this is totally your fault and not the predictable result of us failing to advise you about the controls and budgets we found necessary to scale JS in our environment. Regardless, we see that you lock up main threads for seconds when using our slow system, so in a few years we’ll create a parallel scheduler that will break up the work transparently*”

    [ 2017’s beautiful overview of a cursed errand and 2018’s breathless re-brand ]

  • “The scheduler isn’t ready, but thanks for your patience; here’s a new way to spell your component that introduces new timing issues but doesn’t address the fact that our system is incredibly slow, built for browsers you no longer support, and that CPUs are not getting faster”

    [ representative pitch ]

  • “Now that you’re ‘SSR’ing your SPA and have re-spelt all of your components, and given that the scheduler hasn’t fixed things and CPUs haven’t gotten faster, why not skip SPAs and settle for progressive enhancement of sections of a document?”

    [ “islands”, “server components”, etc. ]

The Steamed Hams of technology pitches.

Like Chalmers, many teams and managers acquiesce to the contradictions embedded in the stacked rationalisations. Dozens of reasons to look the other way were invented, from the marginal to the imaginary.

But even as the complexity merchants’ well-intentioned victims meekly recite the koans of trickle-down UX — it can work this time, if only we try it hard enough! — the evidence mounts that “modern” web development is, in the main, an expensive failure.

The baroque and insular terminology of the in-group is a clue. Its functional purpose (outside of signaling) is to obscure furious plate spinning. This tech isn’t working for most adopters, but admitting as much would shrink the market for lemons.

You’d be forgiven for thinking the verbiage was designed to obfuscate. Little comfort, then, that folks selling new approaches must now wade through waist-deep jargon excrement to argue for the next increment of complexity.

The most recent turn is as predictable as it is bilious. Today’s most successful complexity merchants have never backed down, never apologised, and never come clean about what they knew about the level of expense involved in keeping SPA-oriented technologies in check. But they expect you’ll follow them down the next dark alley anyway:

An admission against interest.

And why not? The industry has been down to clown for so long it’s hard to get in the door if you aren’t wearing a red nose.

The substitution of heroic developer narratives for user success happened imperceptibly. Admitting it was a mistake would embarrass the good and the great alike. Once the lemon sellers embedded the data-light idea that improved “Developer Experience” (“DX”) leads to better user outcomes, improving “DX” became an end unto itself. Many who knew better felt forced to play along.

The long lead time for falsifying trickle-down UX was a feature, not a bug; they don’t need you to succeed, only to keep buying.

As marketing goes, the “DX” bait-and-switch is brilliant, but the tech isn’t delivering for anyone but developers. The goal of the complexity merchants is to put your brand on their marketing page and showcase microsite and to make acqui-hiring your failing startup easier.

Denouement

After more than a decade of JS hot air, the framework-centric pitch is still phrased in speculative terms because there’s no there there. The complexity merchants can’t cop to the fact that management competence and lower complexity — not baroque technology — are determinative of end-user success.

By turns, the simmering embarrassment of a widespread failure of technology-first approaches has created new pressures that have forced the JS colporteurs into a simulated annealing process. In each iteration, they must accept a smaller and smaller rhetorical lane as their sales grow, but the user outcomes fail to improve.

The excuses are running out.

At long last, the journey has culminated with the rollout of Core Web Vitals. It finally provides an effortless, objective quality measurement prospective customers can use to assess frontend architectures. It’s no coincidence the final turn away from the SPA justification has happened just as buyers can see a linkage between the stacks they’ve bought and the monetary outcomes they already value, namely SEO. The objective buyer, circa 2023, will understand heavy JS stacks as a regrettable legacy, one that teams who have hollowed out their HTML and CSS skill bases will pay dearly for in years to come.

No doubt, many folks who now know their web stacks are slow and outdated will do as Akerlof predicts, and work to obfuscate that reality for managers and customers for as long as possible. The market for lemons is, indeed, mostly a resale market, and the excesses of our lost decade will not be flushed from the ecosystem quickly. Beware tools pitching “100 on Lighthouse” without checking the real-world Core Web Vitals results.

Shrinkage

A subtle aspect of Akerlof’s theory is that markets in which lemons dominate eventually shrink. I’ve warned for years that the mobile web is under threat from within, and the depressing data I’ve cited about users moving to apps and away from terrible web experiences is in complete alignment with the theory.

More prosaically, when websites feel like worse experiences to those who greenlight digital services, why should anyone expect them to spend a lot to build a website? And when websites stop being where most of the information and services are, who will hire web developers?

The lost decade we’ve suffered at the hands of lemon purveyors isn’t just a local product travesty; it’s also an ecosystem-level risk. Forget AI putting web developers out of jobs; JS-heavy web stacks have been shrinking the future market for your services for years.

As Stiglitz memorably quipped:

Adam Smith’s invisible hand — the idea that free markets lead to efficiency as if guided by unseen forces — is invisible, at least in part, because it is not there.

But dreams die hard.

I’m already hearing laments from folks who have been responsible citizens of framework-landia lo these many years. Oppressed as they were by the lemon vendors, they worry about babies being thrown out with the bathwater, and I empathise. But for the sake of users, and for the new opportunities for the web that will open up when experiences finally improve, I say “chuck those tubs”. Chuck ’em hard, and post the photos of the unrepentant bastards that sold this nonsense behind the cash register.

Anti JavaScript JavaScript Club

We lost a decade to smooth talkers and hollow marketeering; folks who failed the most basic test of intellectual honesty: signposting known unknowns. Instead of engaging honestly with the emerging evidence, they sold lemons and shrunk the market for better solutions. Furiously playing catch-up to stay one step ahead of market rejection, frontend’s anguished, belated return to quality has been hindered at every step by those who would stand to lose if their false premises and hollow promises were to be fully re-evaluated.

Toxic mimicry and recalcitrant ignorance must not be rewarded.

Vendors’ random walk through frontend choices may eventually lead them to be right twice a day, but that’s not a reason to keep following their lead. No, we need to move our attention back to the folks that have been right all along. The people who never gave up on semantic markup, CSS, and progressive enhancement for most sites. The people who, when slinging JS, have treated it as special occasion food. The tools and communities whose culture puts the user ahead of the developer and holds evidence of doing better for users in the highest regard.

It’s not healing, and it won’t be enough to nurse the web back to health, but tossing the Vercels and the Facebooks out of polite conversation is, at least, a start.

from Sidebar https://infrequently.org/2023/02/the-market-for-lemons/

A Field Guide to AI in the Metaverse



By 2030, each of these technologies (AI, XR, and Blockchain) will be fully integrated into the Metaverse, and each will create massive value for businesses and consumers alike. Learning about and leveraging these new tools will allow the Metaverse to be created not just by programmers, developers, and 3D artists, but by everyone. (keep reading to make your own!)

“With AI in the Metaverse, everyone will be a creator.”

This article will cover Artificial Intelligence exclusively and its importance to the future of the Metaverse.

  1. Generative AI (Text, Audio & Image)
  2. NeRF — 3D spatial capture
  3. Computer Vision & SLAM
  4. Natural Language Processing & Conversational AI
  5. Automatic Content Creation (3D)

AI in the Metaverse holds the power to unleash unlimited creativity while ensuring everyone has equal opportunities. Many will see these technologies as a replacement for human labor, and for some roles this will certainly be true, but more likely we will adapt to doing much more with much less, which will be required as we enter the exponential age of humanity. With generative AI, the biggest thing to note is that, while the neural networks that create novel content are trained on open data sets scraped from the internet, the work they create is not derivative but original. Every piece of content they generate, whether audio, text, video, or images, is a novel creation based on billions of training data points.

“We are entering the exponential age of humanity”

Before you continue this article I want you to understand two things:

  1. AI Changes Everything.
  2. AI is Already Here and it’s not going away.

‘AI has Huge implications in the Metaverse’ — TIME

The Metaverse consists of the collection of media including video, audio, and text that we see in the current iteration of the internet, plus three groups of technologies: AI, XR, and Blockchain. If for no other reason than that Ryan Reynolds is already using AI and incredible art like the video above is being made, you should be paying attention.

Generative AI (Text & Image)

Let’s start with the most common and best understood: generative AI interfaces based on GPT (Generative Pre-Trained Transformer) models, the most well-known being ChatGPT. These generative AI models are trained on massive datasets scraped from the internet. Based on simple text input, these platforms can create incredibly valuable responses that can be used for:

  • Search: AI-powered insights. Google AI, ChatGPT, OpenAI
  • Text: Summarizing or automating content. GPT3/4, ChatGPT, Open AI
  • Images: Generating images. Midjourney, DALL-E, Stable Diffusion
  • Audio: Summarizing, generating or converting text to audio. Play.ht, Clipchamp, Soundraw
  • Video: Generating or editing videos. Synthesia, VEED.io
  • Code: Generating code. ChatGPT, GitHub Co-Pilot, IntelliCode, PyCharm, Jedi
  • Chatbots: Automating customer service and more. Zendesk, Ada, DeepConverse
  • Natural Language Processing (NLP): InWorldAI, Synthesia, MindMeld
  • Computer Vision: HawkEye, VisoAI, DeepMind, SenseTime
  • Simultaneous Localization & Mapping (SLAM): Apple, PTC, Snap, Niantic, Meta
  • Machine Learning (ML): NVIDIA, Microsoft, iTechArt, Meta
  • Suggestion Algorithms: Google, Amazon, Microsoft, Netflix

Let’s do a text one together

STEP 1: Go to chat.openai.com/chat and wait for a free server

STEP 2: Enter Prompt — ‘Write a fun dad joke about AI.’

OUTPUT: Why was the AI feeling cold? Because it left its algorithm open!

STEP 3: Laugh — Either at how dumb the joke is or at how instant the response was; either way, it truly is amazing.

STEP 4: Try a bunch of work-related tasks you need done asap. (ie. Write an article about…Give 10 examples of….Write a marketing strategy for…)
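The same steps can be driven programmatically rather than through the web UI. A minimal sketch, assuming the `openai` Python package and a configured API key; the model name is illustrative and the network call itself is left commented out:

```python
def build_request(prompt, model="gpt-3.5-turbo"):
    """Construct a chat-completion payload for a single user prompt."""
    return {
        "model": model,  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
    }

# The same prompt as STEP 2 above.
payload = build_request("Write a fun dad joke about AI.")

# With the `openai` package installed and an API key set, the call
# would look roughly like this (not run here):
# import openai
# reply = openai.ChatCompletion.create(**payload)
# print(reply["choices"][0]["message"]["content"])
```

The payload shape (a model name plus a list of role/content messages) is the core of the chat-completions interface; everything else is authentication and transport.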

Let’s make an image using Midjourney

STEP 1: Register at Midjourney

STEP 2: Get on the Midjourney Discord Server

STEP 3: Find a room to submit your query.

STEP 4: Prompt the following — /imagine Dragon hanging on a castle, high resolution, photoreal, fire breathing --ar 3:2

NOTE: Imagine is required to start the prompt (not part of the band)

STEP 5: Choose the one you like and click its V# to see 4 more versions

STEP 6: Choose the best one and click U# to upscale the image

OUTPUT: (generated image omitted)

Here is a little more reading for a deeper understanding of Who Owns the Generative AI Platform? from a16z and McKinsey’s “What is Generative AI?” You can also read this PBS Special on How AI Turns Texts into Images.

Without getting too philosophical on this subject, generative AI holds the potential to fundamentally change the fabric of society. Imagine when AI not only defends you in court, but also drafts (and passes) laws. AI is already being used by governments to decide who gets welfare and who doesn’t…and many times, it gets it wrong! Imagine your grandmother being denied medical coverage because an algorithm decided she was not worth saving. What other business models and social constructs will be upended? If everyone uses AI to create content, there will be unintended consequences, but the value this nascent technology will create cannot be overstated.

NeRFs (Neural Radiance Fields)

Let’s move to another subset of AI known as NeRFs. Not related to the foam missiles you fire at your younger brother, Neural Radiance Fields are a complex field of study that uses computer vision from a regular RGB camera to capture video and translate it into volumetric 3D renders you can import into 3D platforms and view spatially. NeRFs turn scans of real-world places into 3D models orders of magnitude faster than current LiDAR solutions, and at a fraction of the cost (a smartphone camera vs. a $50–100K scanner). The AI also fills in the blanks to create a realistic and believable virtual version of a physical space. These virtual models of the real world will help us populate spaces in the Metaverse quickly and easily, making everyone a creator.

These digital replicas of the real world will help us build shared spaces in the metaverse quickly and easily, extending real-life social networks and accelerating mainstream adoption.

For those like me who don’t understand the technical diagrams, there is a more simplified explanation: “NeRF, or better known as Neural Radiance Fields, is a state-of-the-art method that generates novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. The input can be provided as a Blender model or a static set of images.” Basically, wave your phone around and voila, you have a 3D volumetric capture (or at least that is the promise).
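The "volumetric scene function" idea can be made concrete with the compositing step at the heart of NeRF's volume rendering: each sample along a camera ray has a density, and samples are blended by how much light survives to reach them. A minimal sketch (the densities and spacings below are made-up illustration values):

```python
import math

def composite_weights(densities, deltas):
    """Per-sample blending weights from the volume-rendering equation
    w_i = T_i * (1 - exp(-sigma_i * delta_i)), where T_i is the
    transmittance (light not yet absorbed) before sample i."""
    weights = []
    transmittance = 1.0  # all light still unabsorbed at the camera
    for sigma, delta in zip(densities, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this segment
        weights.append(transmittance * alpha)
        transmittance *= 1.0 - alpha  # light remaining for later samples
    return weights

# A ray through empty space that then hits a dense surface: nearly all
# of the pixel's colour comes from the dense sample.
weights = composite_weights([0.0, 0.0, 50.0], [0.1, 0.1, 0.1])
```

Training a NeRF amounts to adjusting the densities (and colours) so that rays composited this way reproduce the input photos.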

NVIDIA getting started with NeRFs guide (for advanced programmers)

Note: There are no really easy ways to do this currently, but if you want to go deep, here is a video that explains how to make a NeRF in the easiest way I have found thus far (warning, it’s hard!).

You can also download a program called Polycam 3D for iPhone or Android and start 3D scanning objects and/or scenes for use in platforms such as MetaVRse or Unity.

Computer Vision & SLAM

Computer vision (CV) is the field of computer science that focuses on replicating the complexity of the human visual system and enabling computers to identify and process objects in images and videos in the same way that humans do. Imagine how autonomous cars see and how VR headsets understand what is around you.

Simultaneous Localization and Mapping (SLAM) is a form of computer vision that allows your phone to map and understand your surroundings in order to display 3D content in your space. Built into your mobile device are several sensors (accelerometer, gyroscope, LiDAR scanner) that, in addition to what the RGB cameras see, provide context in terms of position and orientation in X, Y, Z, known as six-degrees-of-freedom (6-DOF) space. This allows your phone to understand where the floor is and simultaneously project content into augmented reality.
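The "position and orientation" a SLAM system estimates is what lets an app pin virtual content to the world. A deliberately simplified sketch of that anchoring step: real trackers estimate a full 6-DOF pose (three rotations, not just yaw), but the coordinate-frame change is the same idea.

```python
import math

def to_world(device_pose, local_point):
    """Map a point from the device's local frame into world coordinates,
    given a pose (x, y, z, yaw) of the kind a SLAM tracker estimates.
    Simplified: only yaw rotation is modelled here."""
    x, y, z, yaw = device_pose
    lx, ly, lz = local_point
    wx = x + lx * math.cos(yaw) - ly * math.sin(yaw)
    wy = y + lx * math.sin(yaw) + ly * math.cos(yaw)
    wz = z + lz
    return (wx, wy, wz)

# Device at (1, 2, 0) facing 90 degrees left: an object 1 m "ahead" in
# the device frame is anchored at (1, 3, 0) in the world.
anchor = to_world((1.0, 2.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0))
```

Because the anchor lives in world coordinates, the virtual object stays put as the phone (and its local frame) moves.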

As CV technology continues to advance, the possibilities will expand from autonomous vehicles, robots, and drones to augmented reality that looks as real as real.

Some of the capabilities of CV and SLAM include object recognition and tracking (think tracking a real-world object while projecting digital information on top of it).

Natural Language Processing & Conversational AI

Natural Language Processing (NLP) is a field of AI that focuses on the interaction between computers and human language. It involves the use of algorithms and statistical models to analyze, understand, and generate human language. NLP is used in a wide range of applications such as language translation, text-to-speech, sentiment analysis, and more.

Conversational AI is a subfield of NLP that focuses on creating human-like interactions between computers and humans using natural language. This can include chatbots, virtual assistants and voice assistants. The goal of conversational AI is to create a seamless and natural communication experience for users. This can be achieved through the use of advanced NLP techniques such as natural language understanding and generation, as well as machine learning and deep learning.
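As a toy illustration of one NLP task mentioned above, sentiment analysis can be caricatured as counting positive and negative words. Real systems use learned statistical models rather than hand-written lists; the word sets below are invented for the example, but the input/output shape is the same.

```python
def classify_sentiment(text):
    """Toy bag-of-words sentiment analysis: compare counts of positive
    and negative words. Illustrative word lists only."""
    positive = {"love", "great", "amazing", "good", "seamless"}
    negative = {"hate", "bad", "awful", "slow", "broken"}
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

verdict = classify_sentiment("I love this great assistant!")
```

A conversational AI system chains many such analyses (intent, sentiment, entities) before generating its reply.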

Automatic Content Creation

Nothing says AI like automation. These tools allow you to say what it is you want to create and voila, it is there, in 3D! While there will be a ton of these tools in the Metaverse, this is the first one that we know of that works. This slideshow will give you a much deeper understanding of how this technology will revolutionize gaming. Even music is being created by AI now. Give it a try yourself at Anything World.

Check out this cool 3D object created completely by AI on LumaLabs.

To learn more about new cutting-edge technologies like GET3D from NVIDIA, Make-a-Video from Meta, and DreamFusion from Google, follow Two Minute Papers on YouTube.

As you can see, this is the future and while it is not quite ready for prime time, researchers are using AI to solve for AI so it won’t be long before this becomes how we build every virtual world in the Metaverse.

Generative AI Startup Landscape:

Well, there you have it, a pretty comprehensive look at the artificial intelligence algorithms that will directly impact and hopefully benefit you in the Metaverse.

Alan Smithson is co-founder of MetaVRse.

Header image credit: Midjourney


More from AR Insider…

from AR Insider https://alan-smithson.medium.com/practical-guide-to-ai-in-the-metaverse-583020bbe61f

How to ask questions like a UX Researcher

Tips for asking good, engaging, and productive questions

Stock image from Pexels contributor Alex Green: two people sitting at a table, one asking the other a question

As researchers we love a good question, but what’s the craft behind asking questions? In short: beware leading questions, ask one question at a time, and manage the flow. And as a last resort, ask less and listen more.

I mean, everyone asks questions every day, right? So who doesn’t know how to ask questions? Do you ask clear and insightful questions? Perhaps most importantly, do you want to learn how to ask good questions?

Well, actually, these are all horrible questions. They are leading. They come off as a bombardment. And they create zero space for nuanced discussions.

Asking questions is a craft that is at the heart of what UX Researchers do. It is complex, and comes with a rich and eclectic literature inspired by sociology, psychology, and other social sciences. But asking questions can also be very easy. Just don’t be the dummy that wrote the first paragraph and instead avoid the 3 common traps of asking questions.

Tip no. 1: Avoid leading questions at all cost

An attorney standing up in court, declaring “objection, leading the witness”
The lawyers know it!

A leading question suggests the answer to be given, and makes you feel pressured to answer a certain way.

Much like in a courtroom, we want the truth, the whole truth, and nothing but the truth from user interviews. However, the challenge is that, as international superstar Lizzo sagely observed, “Truth Hurts.” Honest feedback can sting. And with a natural inclination for psychological safety, any reasonable person under pressure would say whatever they believed the listener wanted to hear, even at the expense of truth. So without a mandate or an oath, how might we get honest reactions and answers?

The key is to avoid leading questions. Let go. Play dumb. Be curious. Judge none. Any of these strategies would help create a safe environment where the participant might feel secure enough to disclose their deepest and darkest secrets. And sometimes, when an interview goes really well, participants would share their sharpest stretch of mind which would surely shatter any preconceived assumptions.

Imagine how you would respond to the opening question: “Everyone asks questions every day, right?” You’re sitting in front of a computer, talking to the interviewer for the first time via Zoom. No clue about the interviewer’s character other than their interesting fashion choice. With such a leading question, would you challenge the interviewer and confess the truth? Something along the lines of: “No actually, your premise for the question is miserably wrong. Yesterday, I didn’t speak to a living soul the entire day. And let me tell you, it was a delight. So no, I don’t ask questions every day; nor do I want to.”

That’s why it’s important to avoid leading questions — to allow participants to speak their mind. According to sociologist Robert Weiss whose book Learning from Strangers sits atop the syllabus of virtually every PhD-level Qualitative Methods Seminar, “the interview relationship is a research partnership between the interviewer and the respondent.“ And like any well-functioning relationship, there shall be no strong-arming of opinions.


Tip no. 2: Avoid double-, triple-, quadruple- barreled questions

Interviewer saying “I don’t remember the question” while shaking her head
What was the other part of your question again?

A double-barreled (compound) question comes with multiple parts, and expects you to give detailed responses while remembering the many parts of the question.

And it’s just too hard. You’re staring at the camera above the screen, trying to give the interviewer your undivided attention, and in exchange, you receive a barrage of questions too many to count. “Ugh, I guess. I mean, it’s probably fair to say everyone asks questions, right, though I didn’t say a single word yesterday. But regardless, by extension of the premise, everyone should have a little experience with asking questions. And… and.. what was your question again?”

The problem with asking compound questions is that the pace of query far exceeds the regular processing time of the human mind. Unless you have mastered the craft of human parallel computing, answering many questions at a time is outright overwhelming. You’re faced with a tradeoff, between developing a full answer to the first question and zipping through every question to check off the list. Neither is optimal. Especially when you respond to a multi-part question with a multi-part answer, it leaves no room for the researcher to process, unpack, and dig deeper.

In the worst case, double-barreled questions can lead to inaccurate results. Take the third question in the opening paragraph as an example: “Do you ask clear and insightful questions?” How would you respond? What if you’re good at making yourself understood but struggle to navigate the domain context? When I first started at Stavvy, I felt confident about asking questions clearly, but little did I know about the real estate industry. So to the double-barreled question, I would probably say, “Um.. yes?” But that couldn’t be further from the truth.

By splitting up compound queries into singular questions, we ensure that each question gets the time it deserves, and as a result we get accurate answers from interviews.

Tip no. 3: Avoid rigidly going down a list of questions

Housewife Lisa looking up saying “This is so awkward”
When the interviewer reads off of a list of questions

Let’s talk about structure and flow. Ideally, an interview should be a conversation, not an interrogation. As research partners, the interviewer and the participant should move from one topic to another as they both see fit.

There is a spectrum when it comes to interview structure. On one end is the structured interview, where each participant is asked exactly the same questions in the same order with no spontaneous follow-ups. On the other end is the unstructured interview, where everything is free-form and the interviewer might ask drastically different questions from one participant to another.

While there is a specific use case for structured interviews, the rigid format has many limits. Most prominently, it relinquishes the power of conversation and instead acts as a verbal survey. Not only does the format produce awkward transitions between questions, but it also fails to explore the exciting ideas brought up by the participant. This type of interview is a one-way street, not a two-way collaboration.

Similarly, unstructured interviews have their faults, too. While the conversation might flow, the data collected from such sessions will likely vary in its topic coverage, making the analysis challenging, if not impossible.

So what UX Researchers tend to prefer is a format somewhere in between: semi-structured interviews. Have a list of topics to explore, but follow the lead of the participant. Make questions increasingly specific as you dig deeper. I think of the process as exploring a cave with crystals.

An illustration of crystal cave, with clustered of yellow crystals labeled in groups 1 through 7. Caption says: Explore, Guide, Dig
Crystal cave of a mental landscape, illustrated by yours truly

Much like following the crystals in a cave, conducting semi-structured interviews is about following the mental landscape of the participant. There might be clusters of crystals (thoughts) that exist close to each other. So explore 1, 2, and 3 together before pivoting to 4. Once in 4, let the sight of one crystal take you to its neighbor, appreciating the cluster of thoughts in its full magnificence. There’s no point in bringing up 7 out of the blue.

The shape of the crystal also matters. The tippy points of the crystals can be thought of as the points of investigation, specific questions like “are participants able to find the action menu tucked away behind the icons?” We want to meet the participants there. But never arrive by asking leading questions. Instead, with each following question, get a little bit more specific, much like the shape of the crystal getting more pointed as it gets closer to the ground. This way, we avoid projecting ideas onto participants while also eventually getting to the points of investigation.

Bonus tip: Sometimes it’s better to not ask questions

Kyle speaking with emphatic hand gestures: “shut up and listen”
Silence is my secret weapon!

If the three traps of asking questions feel a little bit tricky to navigate, there is always the option to just stay present and listen. I’m not being snarky. Less is more when it comes to asking questions.

If we’re here to establish a partnership with research participants, then, much like on a date — man, you gotta listen. Make the participant feel like you are actively listening, and you just might be rewarded with answers to the unknown unknowns. Resist the urge to fill the silence. Allow participants to mull things over. And if you absolutely need to say something, just repeat their last three words in the form of a question.

“Yesterday, I didn’t speak to a living soul the entire day.”

“The entire day?”

“Yeah, I didn’t have any meetings so I didn’t have to talk to anyone. I kind of like that.”

“Kind of like that?”

“Yeah, it’s just so much peace and productivity. I feel like not everyone has to ask questions all the time. It’s probably just a researcher thing.”

“A researcher thing?”

“Yeah, like, I mean I probably don’t want to spend my day thinking about how to ask questions. I’m happy to just read about it in a blog post.”

This technique, referred to by master negotiator Chris Voss as Mirroring, is a powerful tool for staying engaged in a conversation while allowing maximum space for the other side to tell their story. After all, that’s all interviews are: collecting stories from real people and trying to represent them faithfully. That, combined with the ability to avoid leading questions, split up compound questions, and appreciate the crystal clusters in participants’ minds in an orderly way, will enable anyone to ask questions like a UX Researcher.

How to ask questions like a UX Researcher was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.

from UX Collective – Medium https://uxdesign.cc/how-to-ask-questions-like-a-ux-researcher-a4e02041136c?source=rss—-138adf9c44c—4

Top 10 Interior Design Posts of 2022

10. A Modern São Paulo Apartment That Embraces Biophilia

Driven by the client’s requests, David Ito Arquitetura designed the modern NCC Apartment in São Paulo, Brazil embracing biophilic design through plants, light, and materials, while creating sophisticated social areas and cozy private spaces.

modern living room with fireplace and maroon home furnishings

9. A Seattle Home Made of Glass Boxes Lives Amongst the Trees

Longing to tear down the original home, the clients requested a new, larger home in the same spot that evoked the same feelings as the one demolished. Taking on the task was Sandall Norrie Architects, in collaboration with Swivel Interiors, who designed the Modern Treehouse to live amongst the lush greenery while optimizing views of Lake Washington and Mt. Rainier.

modern living room interior of Vy Yang with light green sectional and two wood bookcases flanking fireplace

8. After: “Coastal Grandma Meets Graphic Designer” Is the Vibe of My New Living Room

After sharing the beginnings of her living room refresh project – layouts, schemes, mood board, cringey before photos (see below!) – our Lifestyle Editor Vy Yang shared the final results of her refresh with Alex Yeske Interiors, along with an in-depth look at the choices of home furnishings and decor!

modern renovated home interior with white walls and colorful quirky furnishings

7. A 17th Century Weaver’s House Transforms Into a Modern Home in Amsterdam

Benthem Crouwel Architects transformed a 17th century weaver’s house in Amsterdam into a modern family home called the Vijzelgracht House. The results are a home built for the future while honoring its rich past full of character with the help of bright colorful furnishings and unique standouts like dichroic glass panels.

modern yet classical living room design with pale green sofa and round rug

6. Worrell Yeung + Colony Pair up to Transform Historic New York City Loft

Worrell Yeung and Colony paired up to transform a historic loft overlooking Union Square in New York City. The project included renovating the 3,000-square-foot, triangular-shaped space into a contemporary residence suited for its homeowners, one of whom is a concert violinist, that can also act as a recital space to entertain guests.

small apartment interior with a multifunctional structure housing bed and storage

5. How Much Function Can Be Added to a 462-Square-Foot Apartment?

Woon Chung Yen of Metre Architects designed a multipurpose structure to reside in the open space of a compact apartment in Singapore. The layered structure contains a bed, a comfy seating area for TV watching, a desk/table for working, eating, or entertaining, stairs to reach the bed, and tons of storage.

angled interior view of modern glass house with open living space

4. A California House Topped With Glass Pavilion + Angular Roof

With panoramic views of the Hollywood sign and the mountains, the California House rests on a steep plot of land in the Hollywood Hills. GLUCK+ were tasked with designing a house on the challenging lot while disrupting the landscape as little as possible. The interior, with its eclectic selection of furnishings, feels open and airy thanks to its glass pavilion-like design.

modern hotel interior bedroom in neutral colors

3. Vipp Opens New One-Room Hotel in Old Pencil Factory in Copenhagen

Marking Vipp’s 6th hotel, the Vipp Pencil Case is located in an old pencil factory right across the bridge from the heart of the city. The one-of-a-kind hotel comprises a single, 90-square-meter room housed within a 1930s Bauhaus-inspired building. The light-filled, ground floor hotel room is the result of a one-year renovation by interior designer, Julie Cloos Mølsgaard, who created a cozy retreat for the design-focused traveler.

living room scheme

2. Before: I Asked an Influencer to Design My Living Room and She Didn’t Disappoint

You saw the results of our Lifestyle Editor Vy Yang’s living room redesign above at #8, and here you can get the lowdown on the initial stages of the project, including working with her interior designer, design schemes, furnishing selections, layouts, and more!

And the most popular interior design post of 2022 is…

quirky modern interior with white walls and colorful furnishings

1. A Colorful + Dreamy, Space-Age Inspired Apartment in Ho Chi Minh City

Designed for a couple working in the art world, this dreamy and colorful apartment has surprises around every corner. Red5studio designed the Dreamscape Apartment in Ho Chi Minh City with no straight lines or hard angles. Instead, the quirky apartment showcases curves from every vantage point. While the surfaces are white, the furnishings are anything but. A fresh palette of colorful furniture pieces and accessories creates a playful and dreamy retreat the family can call home.

from Design Milk https://design-milk.com/top-10-interior-design-posts-of-2022/

10 digital twin trends for 2023


Interest in digital twins has picked up over the last year. Digital twin tools are growing in capability, performance and ease of use. They are also taking advantage of promising formats like USD and glTF to connect the dots among different tools and processes.

Advances in techniques for combining models can also improve the accuracy and performance of hybrid digital twins. Generative AI techniques used for text and images may also help create 3D shapes and even digital twins. These kinds of advances will allow enterprises to mix and match modeling capabilities in new ways and for new tasks. 

Here are 10 trends to watch for in the year ahead. 

1. From connecting files to connecting data

Over the last several years, all the major tools for designing products and infrastructure have been moving to the cloud — but still using legacy file formats to exchange data. Increasingly vendors are calling out the data integration aspects of these tools that make it easier to share digital twins across different tools and services.

This capability often starts as a subset of a vendor’s tools. For example, Siemens is rebranding a new subset of its tools as part of Siemens Xcelerator, while Bentley has launched Phase 2 of the infrastructure metaverse. In November, location intelligence leader Trimble launched Trimble One, a “purpose-built connected construction management offering that includes rich field data, estimating, detailing, project management, finance and human capital management solutions.” 

It’s one thing simply to move apps to the cloud. These innovators are doing something else: pioneering more efficient ways to connect data across those apps. Over the next year, the other major construction and design tool providers will likely announce similar advances for connecting digital twins and digital threads across different processes. 

2. Entertainment firms target the industrial metaverse

Epic and Unity have made significant progress partnering with digital-twin leaders to provide a better user experience across devices. These companies have announced significant partnerships with GIS, construction and automobile leaders. 

Blackshark AI developed the globe behind Microsoft’s latest flight simulator, and went on to scale the tech for automatically transforming raw satellite imagery into labeled digital twins. In April, Maxar, a leading satellite imaging provider, announced a significant investment in Blackshark for Earth-scale digital twins.

Over the next year, more gaming and entertainment companies will find opportunities in the industrial metaverse, which ABI expects to eclipse the consumer metaverse over the next several years.  

3. Nvidia galvanizes support for USD

Pixar pioneered the Universal Scene Description (USD) format to improve movie production workflows. Nvidia has championed USD to connect the dots across various digital twins and industrial metaverse use cases. The company has built connectors to the IFC standard for buildings, and is improving workflows for Siemens in industrial automation and Bentley in construction.

USD still lacks support for physics, materials and rigging, but despite its limitations, there is nothing better for organizing the 3D information for giant digital twins. Nvidia’s pioneering work on USD promises to integrate raw data with various industry, medicine and enterprise workflows. 

4. glTF simplifies digital-twin exchange

There is growing momentum behind the glTF file format for exchanging 3D models across different tools. The Khronos Group calls it the JPEG for the metaverse and digital twins. Expect glTF to pick up steam, particularly as creators look for an easy way of sharing interactive 3D models across tools. 

5. Generative AI meets digital twins

Over the last year, the world has been wowed by how easy it is to use ChatGPT to write text and Stable Diffusion to create images. Meanwhile, others have demonstrated new multimodal tools like DeepMind’s Gato for harmonizing models across text, video, 3D and robotic instructions. Over the next year, we can expect more progress in connecting generative AI techniques with digital twin models for describing not only the shape of things but how they work.

Yashar Behzadi, CEO and founder of Synthesis AI, a synthetic data tools provider, said, “This emerging capability will change the way games are built, visual effects are produced and immersive 3D environments are developed. For commercial usage, democratizing this technology will create opportunities for digital twins and simulations to train complex computer vision systems, such as those found in autonomous vehicles.”

6. Hybrid digital twins 

There are a variety of performance, accuracy and use case tradeoffs among the models used in digital twins. Prith Banerjee, CTO of Ansys, believes that in 2023 enterprises will find new ways to combine different approaches into hybrid digital twins.

Hybrid digital twins make it easier for CIOs to understand the future of a given asset or system. They will enable companies to merge asset data collected by IoT sensors with physics data to optimize system design, predictive maintenance and industrial asset management. Banerjee foresees more and more industries adopting this approach with disruptive business results in the coming years. 

For example, a healthcare company can develop an electrophysiology simulation of a heartbeat as the muscles contract, the valves open and the blood flows between the heart’s chambers. The company can then take a patient’s MRI scan and develop a simulation of that specific individual’s heart and how it would react to the insertion of a particular pacemaker model. If this R&D work is successful, it could help medical device and equipment companies invent new products and apply for FDA trials by demonstrating in-silico trials. 

7. FDA Modernization Act replaces animals with silicon

Animal testing has been a requirement for all new drugs and treatments since the FDA’s early days. This year, the U.S. Congress passed the FDA Modernization Act 2.0, allowing pharmaceutical companies to replace animal testing with in-vitro and in-silico methods. This will drive innovation and commercialization of patients-on-a-chip and better medical digital twins for testing more cost-effectively and humanely. 

Tamara Drake, director of research and regulatory policy at the Center for Responsible Science, told VentureBeat, “We believe in-silico methods, including use of artificial intelligence in conjunction with advanced organs on a chip, or patient-on-a-chip, will be the biggest trend in drug development in coming years.”  

8. Digital twin ecosystems open new use cases

Matt Barrington, emerging technology leader at EY Americas, predicts that digital twins will increasingly transform how we run companies in 2023. For example, using a digital market twin to evaluate new products will support management and strategic decision-making. Digital twins will also underpin supply chain resilience in uncertain times, and improve risk management, safety and sustainability.

This transformation will require increased emphasis on foundational digital capabilities in data management and devops for data engineering, as well as a more comprehensive approach to security. Barrington predicts fragmentation and a high degree of specialization in the market, such that no single vendor has an end-to-end digital twin solution. Companies will have to integrate several capabilities to create the right fit-for-purpose solution for their business. Part of that approach will require more composable, open architectures and the ability to curate an ecosystem-based approach. 

9. Enterprise digital twins take off

Vendors have made significant advances in tools for process mining and process capture to create a digital twin of the organization.

Bernd Gross, CTO at Software AG, said these advances allow enterprises to create simulations for an entire department or a cluster of business processes rather than a single business process. 

Leaders will find ways to incorporate various technologies, such as process mining, risk analysis and compliance monitoring, to drive more accurate outcomes. These techniques require greater breadth and depth of data. Today, enterprises must include relevant KPIs, causalities between processes, the life cycle of a business unit and more to create a genuinely accurate enterprise digital twin. 

10. Digital twins drive 5G

5G delivers significantly faster speeds in direct view of one of the newer towers, but can be slower than 4G in the radio shadow zone. Cellular service providers are engaged in a race to fill in these shadows, and digital twins could help. Fortune Business Insights estimates that the market for 5G cells could grow by 54.4% annually through 2028.

Mike Flaxman, spatial data science lead at Heavy AI, said many telcos are looking at digital twins to shift to a plan, build, and operate model that allows them to maximize service while cutting costs.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

from VentureBeat https://venturebeat.com/programming-development/10-digital-twin-trends-for-2023/

Expanding the Reach of Design Tokens: How to Use Them in Non-UI Design


Graphical Version of Color Palette

Design Tokens: The Secret to Consistency Beyond the User Interface

An organization can use design tokens to ensure consistency and coherence across all of its design decisions, not just those related to the user interface (UI). As we’ll see in this post, design tokens can be used to make a wide variety of design elements, including PowerPoint presentations, flyers, ads, and even a company’s printed materials, more consistent and higher quality. But first, let’s cover some basics for people who are new to the concept of design tokens.

What are design tokens?

A design token is a variable that represents a core design element in a system, such as color, typography, spacing, and other interactive and visual properties. Designers and developers can easily access and use these elements as tokens throughout the design process, ensuring that each design decision is consistent with the overall design scheme.
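
As a concrete sketch of the idea, a token set can be expressed as a plain data structure that design tools and build scripts alike can read. The token names and values below are invented for illustration, not taken from any particular design system:

```typescript
// A minimal, hypothetical design-token set. Every core value lives in
// one place, so anything that consumes these tokens stays consistent.
const tokens = {
  color: {
    brandPrimary: "#0050b3",
    textDefault: "#1f1f1f",
    surface: "#ffffff",
  },
  typography: {
    fontFamily: "Inter, sans-serif",
    bodySizePx: 16,
    headingSizePx: 28,
  },
  spacing: {
    smallPx: 8,
    mediumPx: 16,
    largePx: 32,
  },
} as const;

// A design artifact references tokens, never raw values, so a change
// to a token propagates everywhere it is used.
const buttonStyle = {
  background: tokens.color.brandPrimary,
  padding: `${tokens.spacing.smallPx}px ${tokens.spacing.mediumPx}px`,
  font: `${tokens.typography.bodySizePx}px ${tokens.typography.fontFamily}`,
};

console.log(buttonStyle.padding); // "8px 16px"
```

If the brand color ever changes, updating tokens.color.brandPrimary once updates every style built from it.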

Why use design tokens?

  • Using design tokens has several benefits. First and foremost, it ensures consistency in the design of a product or brand. All elements of the product or brand can have a cohesive and harmonious look and feel by using the same set of design tokens throughout the design process.
  • As well as improving consistency, design tokens can increase the efficiency of the design process. Defining design elements as tokens allows designers to easily access and use them, eliminating the need to create common elements from scratch. For larger design projects, this can save a great deal of time and effort.
  • Finally, design tokens can enhance the maintainability of a design system. By using tokens to represent core design elements, designers can easily update the system to reflect changes in branding or design direction. To stay on top of changing trends or market needs, this can be particularly useful for companies.

Tokens beyond UI design

Design tokens are frequently used in UI design, but they can also be used in PowerPoint presentations, flyers, ads, PDFs, and even physical materials like company printers.

In a PowerPoint presentation, for example, design tokens can be used to define colors, typography, and other visual elements. That way, the company can maintain a consistent and professional appearance regardless of who is creating its presentations. Design tokens can likewise be used to create a cohesive and unified brand image on flyers, ads, and other promotional materials.

Even physical materials like company printers, business cards, and other branded items can be designed consistently with design tokens. By defining the colors, typography, and other design elements as tokens, a company can ensure that all of its physical materials have a consistent and professional appearance, regardless of where or when they are produced.

For a company to use design tokens in their PowerPoint presentations, here are some steps that they can follow:

  1. Define your design tokens in advance. Start by deciding which core design elements you wish to include as tokens, such as colors, typography, spacing, and other visual elements.
  2. Once you’ve defined your design tokens, create a library to store them. This can be a simple spreadsheet or a design-system platform such as Figma. Ideally, make your design tokens available on a platform that is accessible to non-designers as well.
  3. When you create PowerPoint templates for your company, use the design tokens in them to ensure consistency and coherence. For example, if you have defined a particular color as a design token, make sure that color is used consistently throughout the template.
  4. Update the design token library as needed. If the design direction changes while you are working on the templates, update the library so that every design decision stays consistent with the overall design system.
  5. Whenever employees create presentations, encourage them to use the company’s PowerPoint templates. That way, all presentations follow the design system defined by the design tokens and share a consistent look.
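
One way to make the template step concrete is to generate the template’s theme values from the token library instead of copying them by hand. The token names and theme slots below are hypothetical, a sketch of the approach rather than a real PowerPoint integration:

```typescript
// Hypothetical token library; in practice this might be a shared JSON
// file exported from a design-system platform.
const tokens = {
  color: { brandPrimary: "#0050b3", accent: "#fa8c16", textDefault: "#1f1f1f" },
  typography: { headingFont: "Georgia", bodyFont: "Calibri" },
};

// Map tokens onto the slots a presentation theme expects. Regenerating
// the theme after a token change keeps every template in sync.
function toPresentationTheme(t: typeof tokens) {
  return {
    accent1: t.color.brandPrimary,
    accent2: t.color.accent,
    text1: t.color.textDefault,
    majorFont: t.typography.headingFont,
    minorFont: t.typography.bodyFont,
  };
}

const theme = toPresentationTheme(tokens);
console.log(theme.accent1); // "#0050b3"
```

Because the mapping is a single function, anyone regenerating the template picks up the current tokens automatically rather than a stale copy.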

The above steps will assist a company in using design tokens to define the colors, typography, and other visual elements of their PowerPoint presentations, helping to create a cohesive and unified brand image for the company.

Other creative ways to use tokens

  1. Design templates: Design tokens can be used to create templates for different types of design work, such as social media posts, email newsletters, and presentation slides, ensuring that all your templates are consistent and professional.
  2. Design marketing materials: Design tokens can also be used to design marketing materials such as flyers, brochures, and ads, creating a cohesive and unified brand image across all of your marketing efforts and helping to build brand recognition.
  3. Design physical materials: Design tokens can even be used to develop physical materials, such as business cards, packaging, branded merchandise, and stationery. By using design tokens, you ensure that all your physical materials are well-designed and look consistent.

Tokens are a powerful tool for ensuring consistency and coherence in user interface design. As long as designers define the core design elements as tokens, and use them consistently during the entire design process, they can create a UI that reflects the unique identity of their brand cohesively and harmoniously. The process of integrating design tokens into user interfaces involves a bit of planning and organization on the part of product designers, but the benefits of improving consistency, efficiency, and maintainability outweigh the effort.

That’s the end of this short yet hopefully insightful read. Thanks for making it to the end. I hope you gained something from it.

👨🏻‍💻 Join my content verse or slide into my DMs on LinkedIn, Twitter, Figma, Dribbble, and Substack. 💭 Comment your thoughts and feedback, or start a conversation!

from Design Systems on Medium https://uxplanet.org/expanding-the-reach-of-design-tokens-how-to-use-them-in-non-ui-design-60aa4a8e87c

Top 10 UX Articles of 2022

The following user-experience articles published in 2022 were the ones our audience read the most:

  1. Data Tables: Four Major User Tasks
    Table design should support four common user tasks: find records that fit specific criteria, compare data, view/edit/add a single row’s data, and take actions on records.
  2. UX Strategy: Definition and Components
    A UX strategy is a 3-part plan that fosters shared understanding of direction toward achieving goals before designing and implementing solutions. It serves to intentionally guide the prioritization and execution of UX work over time.
  3. Best Font for Online Reading: No Single Answer
    Among high-legibility fonts, a study found a 35% difference in reading speeds between the best and the worst. People read 11% slower for every 20 years they age.
  4. A Guide to Using User-Experience Research Methods
    Modern-day UX research methods answer a wide range of questions. To help you know when to use which user research method, each of 20 methods is mapped across 3 dimensions and over time within a typical product-development process.
  5. Infinite Scrolling: When to Use It, When to Avoid It
    Infinite scrolling minimizes interaction costs and increases user engagement, but it isn’t a good fit for every website. For some, pagination or a Load More button will be a better solution.
  6. Personas vs. Archetypes
    Archetypes and personas used for UX work contain similar insights, are based on similar kinds of data, and differ mainly in presentation. Personas are presented as a single human character, whereas archetypes are not tied to specific names or faces.
  7. Setting UX Roles and Responsibilities in Product Development: The RACI Template
    Use a flexible responsibility-assignment matrix to clarify UX roles and responsibilities, anticipate team collaboration points, and maintain productivity in product development.
  8. Using Grids in Interface Designs
    Grids help designers create cohesive layouts, allowing end users to easily scan and use interfaces. A good grid adapts to various screen sizes and orientations, ensuring consistency across platforms.
  9. Two Tips for Better UX Storytelling
    Effective storytelling involves both engaging the audience and structuring stories in a concise, yet effective manner. You can improve your user stories by taking advantage of the concept of story triangle and of the story-mountain template.
  10. Antipersonas: What, How, Who, and Why?
    Antipersonas help anticipate how products can be misused in ways that can harm users and the business.

Top 10 Study Guides

We launched a new content format: the study guide, which structures our articles and videos about a certain topic, and guides learners to study the topic in the best sequence. These were our 10 most popular study guides in 2022:

  1. UX Writing
  2. Information Architecture
  3. Mobile UX
  4. Design Thinking
  5. Psychology for UX
  6. Design-Pattern Guidelines
  7. Lean UX & Agile
  8. Service Design
  9. Visual Design in UX
  10. Qualitative Usability Testing

Bonus: Top 5 Articles from Last Year

The following articles were published in 2021 but were so popular in 2022 that they would have earned a place in the above list based on this year’s readership numbers alone:

  1. Design Systems 101
    A design system is a set of standards to manage design at scale by reducing redundancy while creating a shared language and visual consistency across different pages and channels.
  2. Using “How Might We” Questions to Ideate on the Right Problems
    Constructing how-might-we questions generates creative solutions while keeping teams focused on the right problems to solve.
  3. Mapping User Stories in Agile
    User-story maps help Agile teams define what to build and maintain visibility for how it all fits together. They enable user-centered conversations, collaboration, and feature prioritization to align and guide iterative product development.
  4. The 6 Levels of UX Maturity
    Our UX-maturity model has 6 stages that cover processes, design, research, leadership support, and longevity of UX. Use our quiz to get an idea of your organization’s UX maturity.
  5. Problem Statements in UX Discovery
    In the discovery phase of a UX project, a problem statement is used to identify and frame the problem to be explored and solved, as well as to communicate the discovery’s scope and focus.


from NN/g latest articles and announcements https://www.nngroup.com/news/item/top-10-ux-articles-of-2022/

37% of You Are Cutting Your Marketing Budgets For Next Year. But 34% of You Are Increasing Them.

Everyone is being asked to make do with less, one way or another.  Layoffs are back, and hiring freezes are even more back.  And even if growth is strong, folks are getting more conservative.  They want less spend for more output.

And what often gets cut first, and what investors especially push to cut, is marketing.  Unless it can get us more deals right now, this month.

And indeed, that’s what we’re seeing in one of the latest SaaStr polls:

  • 20% of you are really cutting your marketing budgets hard, by 30% or more
  • 17% of you are cutting them a bit, 10%-20%.  This seems pretty common where cash is an issue.
  • 29% of you are keeping it flat.  Which sometimes really is down, if you have to add more pipeline in 2023 with the same budget.
  • 34% of you are increasing your marketing budget.  This isn’t the majority, but importantly, it is the largest category.

So look, a lot of the theme of SaaStr these days is yes, things are harder than a year ago.  But so they should be.  And Cloud and SaaS are still growing.  So you have to play to your strengths, and your pockets of strengths.

If you sell marketing tools, well, 34% of folks are still increasing their budgets.  Go find them.  Sell better.  Write better outbound emails.  Adjust your pitch.  Really, truly prove your ROI to marketers.  Because the customers aren’t gone.  In fact, many are doing better than ever.  It’s just harder.

See also, sales productivity tools, and many other categories.

Budgets aren’t gone.  They’re just under a lot more scrutiny, from a lot more stakeholders.  And perhaps, that’s how it should be.  And every category of buyers has a segment still growing quickly.  Go find them.

The post 37% of You Are Cutting Your Marketing Budgets For Next Year. But 34% of You Are Increasing Them. appeared first on SaaStr.

from SaaStr https://www.saastr.com/37-of-you-are-cutting-your-marketing-budgets-for-next-year-but-34-of-you-are-increasing-them/