The Market for Lemons


For most of the past decade, I have spent a considerable fraction of my professional life consulting with teams building on the web.

It is not going well.

Not only are new services being built to a self-defeatingly low UX and performance standard, existing experiences are pervasively re-developed on unspeakably slow, JS-taxed stacks. At a business level, this is a disaster, raising the question: “why are new teams buying into stacks that have failed so often before?”

In other words, “why is this market so inefficient?”

George Akerlof’s most famous paper introduced economists to the idea that information asymmetries distort markets and reduce the quality of goods because sellers with more information can pass off low-quality items as more valuable than informed buyers appraise them to be. (PDF, summary)

Customers that can’t assess the quality of products pay the wrong amount for them, creating a disincentive for high-quality products to emerge and working against their success when they do. For many years, this effect has dominated the frontend technology market. Partisans for slow, complex frameworks have successfully marketed lemons as the hot new thing, despite the pervasive failures in their wake, crowding out higher-quality options in the process.

These technologies were initially pitched on the back of “better user experiences”, but have utterly failed to deliver on that promise outside of the high-management-maturity organisations in which they were born. Transplanted into the wider web, these new stacks have proven to be expensive duds.

The complexity merchants knew their environments weren’t typical, but they sold highly specialised tools as though they were generally appropriate. They understood that most websites lack tight latency budgeting, dedicated performance teams, hawkish management reviews, ship gates to prevent regressions, and end-to-end measurements of critical user journeys. They understood that the only way to scale JS-driven frontends is massive investment in controlling complexity, but warned none of their customers.

They also knew that their choices were hard to replicate. Few can afford to build and maintain 3+ versions of the same web app (“desktop”, “mobile”, and “lite”), and vanishingly few scaled sites feature long sessions and login-gated content.

Armed with all of this background and knowledge, they kept the caveats to themselves.

What Did They Know And When Did They Know It? #

This information asymmetry persists; the worst actors still haven’t levelled with their communities about what it takes to operate complex JS stacks at scale. They did not signpost the delicate balance of engineering constraints that allowed their products to adopt this new, slow, and complicated tech. Why? For the same reason used car dealers don’t talk up average monthly repair costs.

The market for lemons depends on customers having less information than those selling shoddy products. Some who hyped these stacks early on were earnestly ignorant, which is forgivable when recognition of error leads to changes in behaviour. But that’s not what the most popular frameworks of the last decade did.

As time passed, and the results continued to underwhelm, an initial lack of clarity was revealed to be intentional omission. These omissions have been material to both users and developers. Extensive evidence of these failures was provided directly to their marketeers, often by me. At some point (certainly by 2017) the omissions veered into intentional prevarication.

Faced with the dawning realisation that this tech mostly made things worse, not better, the JS-industrial-complex pulled an Exxon.

They could have copped to an honest error, admitted that these technologies require vast infrastructure to operate; that they are unscalable in the hands of all but the most sophisticated teams. They did the opposite, doubling down, breathlessly announcing vapourware year after year to forestall critical thinking about fundamental design flaws. They also worked behind the scenes to marginalise those who pointed out the disturbing results and extraordinary costs.

Credit where it’s due, the complexity merchants have been incredibly effective in one regard: top-shelf marketing discipline.

Over the last ten years, they have worked overtime to make frontend an evidence-free zone. The hucksters knew that discussions about performance tradeoffs would not end with teams investing more in their technology, so boosterism and misdirection were aggressively substituted for evidence and debate. Like a curtain of Halon descending to put out the fire of engineering debate, they blanketed the discourse with toxic positivity. Those who dared speak up were branded “negative” and “haters”, no matter how much data they lugged in tow.

Sandy Foundations #

It was, of course, bullshit.

Astonishingly, gobsmackingly effective bullshit, but nonsense nonetheless. There was a point to it, though. Playing for time allowed the bullshitters to punt introspection of the always-wrong assumptions they’d built their entire technical edifice on. In time, these misapprehensions would become cursed articles of faith:

  • CPUs get faster every year

    [ narrator: they do not  ]

  • Organisations can manage these complex stacks

    [ narrator: they cannot  ]

All of this was falsified by 2016, but nobody wanted to turn on the house lights while the JS party was in full swing. Not the developers being showered with shiny tools and boffo praise for replacing “legacy” HTML and CSS that performed fine. Not the scoundrels peddling foul JavaScript elixirs and potions. Not the managers that craved a check to write and a rewrite to take credit for in lieu of critical thinking about user needs and market research.

Consider the narrative Crazy Ivans that led to this point.

By 2013 the trashfuture was here, just not evenly distributed yet. Undeterred, the complexity merchants spent a decade selling inequality-exacerbating technology as a cure-all tonic.

It’s challenging to summarise a vast discourse over the span of a decade, particularly one as dense with jargon and acronyms as the one that led to today’s status quo of overpriced failure. These are not quotes, but vignettes of distinct epochs in our tortured journey:

  • “SPAs are a better user experience, and managing state is a big problem on the client side. You’ll need a tool to help structure that complexity when rendering on the client side, and our framework works at scale”

    [ illustrative example ]

  • “Instead of waiting on the JavaScript that will absolutely deliver a superior SPA experience…someday…why not render on the server as well, so that there’s something for the user to look at while they wait for our awesome and totally scalable JavaScript to collect its thoughts?”

    [ an intro to “isomorphic javascript”, a.k.a. “Server-Side Rendering”, a.k.a. “SSR” ]

  • “SPAs are a better experience, but everyone knows you’ll need to do all the work twice because SSR makes that better experience minimally usable. But even with SSR, you might be sending so much JS that things feel bad. So give us credit for a promise of vapourware for delay-loading parts of your JS.”

    [ impressive stage management ]

  • “SPAs are a better experience. SSR is vital because SPAs take a long time to start up, and you aren’t using our vapourware to split your code effectively. As a result, the main thread is often locked up, which could be bad?

    Anyway, this is totally your fault and not the predictable result of us failing to advise you about the controls and budgets we found necessary to scale JS in our environment. Regardless, we see that you lock up main threads for seconds when using our slow system, so in a few years we’ll create a parallel scheduler that will break up the work transparently*”

    [ 2017’s beautiful overview of a cursed errand and 2018’s breathless re-brand ]

  • “The scheduler isn’t ready, but thanks for your patience; here’s a new way to spell your component that introduces new timing issues but doesn’t address the fact that our system is incredibly slow, built for browsers you no longer support, and that CPUs are not getting faster”

    [ representative pitch ]

  • “Now that you’re ‘SSR’ing your SPA and have re-spelt all of your components, and given that the scheduler hasn’t fixed things and CPUs haven’t gotten faster, why not skip SPAs and settle for progressive enhancement of sections of a document?”

    [ “islands”, “server components”, etc. ]

The Steamed Hams of technology pitches.

Like Chalmers, many teams and managers acquiesce to the contradictions embedded in the stacked rationalisations. Dozens of reasons to look the other way were invented, from the marginal to the imaginary.

But even as the complexity merchants’ well-intentioned victims meekly recite the koans of trickle-down UX — it can work this time, if only we try it hard enough! — the evidence mounts that “modern” web development is, in the main, an expensive failure.

The baroque and insular terminology of the in-group is a clue. Its functional purpose (outside of signalling) is to obscure furious plate spinning. This tech isn’t working for most adopters, but admitting as much would shrink the market for lemons.

You’d be forgiven for thinking the verbiage was designed to obfuscate. Little comfort, then, that folks selling new approaches must now wade through waist-deep jargon excrement to argue for the next increment of complexity.

The most recent turn is as predictable as it is bilious. Today’s most successful complexity merchants have never backed down, never apologised, and never come clean about what they knew about the level of expense involved in keeping SPA-oriented technologies in check. But they expect you’ll follow them down the next dark alley anyway:

An admission against interest.

And why not? The industry has been down to clown for so long it’s hard to get in the door if you aren’t wearing a red nose.

The substitution of heroic developer narratives for user success happened imperceptibly. Admitting it was a mistake would embarrass the good and the great alike. Once the lemon sellers embedded the data-light idea that improved “Developer Experience” (“DX”) leads to better user outcomes, improving “DX” became an end unto itself. Many who knew better felt forced to play along.

The long lead time for falsifying trickle-down UX was a feature, not a bug; they don’t need you to succeed, only to keep buying.

As marketing goes, the “DX” bait-and-switch is brilliant, but the tech isn’t delivering for anyone but developers. The goal of the complexity merchants is to put your brand on their marketing page and showcase microsite and to make acqui-hiring your failing startup easier.

Denouement #

After more than a decade of JS hot air, the framework-centric pitch is still phrased in speculative terms because there’s no there there. The complexity merchants can’t cop to the fact that management competence and lower complexity — not baroque technology — are determinative of end-user success.

By turns, the simmering embarrassment of a widespread failure of technology-first approaches has created new pressures that have forced the JS colporteurs into a simulated annealing process. In each iteration, they must accept a smaller and smaller rhetorical lane as their sales grow, but the user outcomes fail to improve.

The excuses are running out.

At long last, the journey has culminated with the rollout of Core Web Vitals. It finally provides an effortless, objective quality measurement prospective customers can use to assess frontend architectures. It’s no coincidence the final turn away from the SPA justification has happened just as buyers can see a linkage between the stacks they’ve bought and the monetary outcomes they already value, namely SEO. The objective buyer, circa 2023, will understand heavy JS stacks as a regrettable legacy, one that teams who have hollowed out their HTML and CSS skill bases will pay dearly for in years to come.

No doubt, many folks who now know their web stacks are slow and outdated will do as Akerlof predicts, and work to obfuscate that reality for managers and customers for as long as possible. The market for lemons is, indeed, mostly a resale market, and the excesses of our lost decade will not be flushed from the ecosystem quickly. Beware tools pitching “100 on Lighthouse” without checking the real-world Core Web Vitals results.
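For buyers who want that real-world check, the public Chrome UX Report (CrUX) API exposes field Core Web Vitals per origin. A minimal Python sketch of querying it (the API key is a placeholder you must supply yourself; metric and field names follow the public CrUX API, but treat this as an illustrative sketch, not a supported client):

```python
import json
import urllib.request

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_crux_query(origin, form_factor="PHONE"):
    """Build the request body for the CrUX records:queryRecord call."""
    return {
        "origin": origin,
        "formFactor": form_factor,
        "metrics": [
            "largest_contentful_paint",
            "interaction_to_next_paint",
            "cumulative_layout_shift",
        ],
    }

def fetch_crux_record(origin, api_key):
    """POST the query and return the parsed JSON record for the origin."""
    body = json.dumps(build_crux_query(origin)).encode("utf-8")
    req = urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Field data like this reflects what real users experienced, which is exactly what a lab-only “100 on Lighthouse” pitch can hide.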

Shrinkage #

A subtle aspect of Akerlof’s theory is that markets in which lemons dominate eventually shrink. I’ve warned for years that the mobile web is under threat from within, and the depressing data I’ve cited about users moving to apps and away from terrible web experiences is in complete alignment with the theory.

More prosaically, when websites feel like worse experiences to those who greenlight digital services, why should anyone expect them to spend a lot to build a website? And when websites stop being where most of the information and services are, who will hire web developers?

The lost decade we’ve suffered at the hands of lemon purveyors isn’t just a local product travesty; it’s also an ecosystem-level risk. Forget AI putting web developers out of jobs; JS-heavy web stacks have been shrinking the future market for your services for years.

As Stiglitz memorably quipped:

Adam Smith’s invisible hand — the idea that free markets lead to efficiency as if guided by unseen forces — is invisible, at least in part, because it is not there.

But dreams die hard.

I’m already hearing laments from folks who have been responsible citizens of framework-landia lo these many years. Oppressed as they were by the lemon vendors, they worry about babies being thrown out with the bathwater, and I empathise. But for the sake of users, and for the new opportunities for the web that will open up when experiences finally improve, I say “chuck those tubs”. Chuck ’em hard, and post the photos of the unrepentant bastards that sold this nonsense behind the cash register.

Anti JavaScript JavaScript Club

We lost a decade to smooth talkers and hollow marketeering; folks who failed the most basic test of intellectual honesty: signposting known unknowns. Instead of engaging honestly with the emerging evidence, they sold lemons and shrunk the market for better solutions. Furiously playing catch-up to stay one step ahead of market rejection, frontend’s anguished, belated return to quality has been hindered at every step by those who would stand to lose if their false premises and hollow promises were to be fully re-evaluated.

Toxic mimicry and recalcitrant ignorance must not be rewarded.

Vendors’ random walk through frontend choices may eventually lead them to be right twice a day, but that’s not a reason to keep following their lead. No, we need to move our attention back to the folks that have been right all along. The people who never gave up on semantic markup, CSS, and progressive enhancement for most sites. The people who, when slinging JS, have treated it as special occasion food. The tools and communities whose culture puts the user ahead of the developer and holds evidence of doing better for users in the highest regard.

It’s not healing, and it won’t be enough to nurse the web back to health, but tossing the Vercels and the Facebooks out of polite conversation is, at least, a start.

from Sidebar https://infrequently.org/2023/02/the-market-for-lemons/

A Field Guide to AI in the Metaverse



By 2030, each of these technologies (AI, XR, and Blockchain) will be fully integrated into the Metaverse, and each will create massive value for businesses and consumers alike. Learning about and leveraging these new tools will allow the Metaverse to be created not just by programmers, developers, and 3D artists, but by everyone. (keep reading to make your own!)

“With AI in the Metaverse, everyone will be a creator.”

This article will cover Artificial Intelligence exclusively and its importance to the future of the Metaverse.

  1. Generative AI (Text, Audio & Image)
  2. NeRF — 3D spatial capture
  3. Computer Vision & SLAM
  4. Natural Language Processing & Conversational AI
  5. Automatic Content Creation (3D)

AI in the Metaverse holds the power to unleash unlimited creativity while ensuring everyone has equal opportunities. Many will see these technologies as a replacement for human labor, and for some roles, this will certainly be true, but more likely we will adapt to doing much more with much less, which will be required as we enter the exponential age of humanity. With Generative AI, the biggest thing to note is that while the neural networks they use to create novel content are trained on open data sets scraped from the internet, the work they create is not derivative but original. Every piece of content they generate, whether audio, text, video, or images, is a novel creation based on billions of training data points.

“We are entering the exponential age of humanity”

Before you continue this article, I want you to understand two things:

  1. AI Changes Everything.
  2. AI is Already Here and it’s not going away.

‘AI has Huge implications in the Metaverse’ — TIME

The Metaverse consists of the collection of media including video, audio, and text that we see in the current iteration of the internet plus three groups of technologies; AI, XR, and Blockchain. If for no other reason than that Ryan Reynolds is already using AI and incredible art like the video above is being made, you should be paying attention.

Generative AI (Text & Image)

Let’s start with the most common and best understood: generative AI interfaces based on GPT (Generative Pre-Trained Transformer) algorithms, the most well-known being ChatGPT. These generative AI models are trained on massive datasets scraped from the internet. Based on simple text input, these AI platforms can create incredibly valuable responses that can be used for:

  • Search: AI-powered insights. Google AI, ChatGPT, OpenAI
  • Text: Summarizing or automating content. GPT3/4, ChatGPT, Open AI
  • Images: Generating images. Midjourney, DALL-E, Stable Diffusion
  • Audio: Summarizing, generating or converting text to audio. Play.ht, Clipchamp, Soundraw
  • Video: Generating or editing videos. Synthesia, VEED.io
  • Code: Generating code: ChatGPT, GitHub Co-Pilot, IntelliCode, PyCharm, Jedi
  • Chatbots: Automating customer service and more. Zendesk, Ada, DeepConverse
  • Natural Language Processing (NLP): InWorldAI, Synthesia, MindMeld
  • Computer Vision: HawkEye, VisoAI, DeepMind, SenseTime
  • Simultaneous Location & Mapping (SLAM): Apple, PTC, Snap, Niantic, Meta
  • Machine Learning (ML): NVIDIA, Microsoft, iTechArt, Meta
  • Suggestion Algorithms: Google, Amazon, Microsoft, Netflix

Let’s do a text one together

STEP 1: Go to chat.openai.com/chat — wait for a free server

STEP 2: Enter Prompt — ‘Write a fun dad joke about AI.’

OUTPUT: Why was the AI feeling cold? Because it left its algorithm open!

STEP 3: Laugh — either at how dumb the joke is or at how instant the response was; either way, it truly is amazing.

STEP 4: Try a bunch of work-related tasks you need done asap. (ie. Write an article about…Give 10 examples of….Write a marketing strategy for…)
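The chat UI in the steps above can also be scripted. A minimal sketch against OpenAI’s Chat Completions API (the api_key is a placeholder you must supply; error handling and model choice are simplified for illustration):

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt, model="gpt-3.5-turbo"):
    """Build the JSON body for a Chat Completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt, api_key):
    """Send the prompt and return the assistant's reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

With a key in hand, `ask("Write a fun dad joke about AI.", api_key)` reproduces STEP 2 programmatically, which is handy for the work-related tasks in STEP 4.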

Let’s make an image using Midjourney

STEP 1: Register at Midjourney

STEP 2: Get on the Midjourney Discord Server

STEP 3: Find a room to submit your query.

STEP 4: Prompt the following — /imagine Dragon hanging on a castle, high resolution, photoreal, fire breathing --ar 3:2

NOTE: /imagine is required to start the prompt (not part of the band)

STEP 5: Choose the one you like and click the corresponding V# to see 4 more variations of it

STEP 6: Choose the best one and click U# to upscale the image

OUTPUT:

Here is a little more reading for a deeper understanding of Who Owns the Generative AI Platform? from a16z and McKinsey’s “What is Generative AI?” You can also read this PBS Special on How AI Turns Texts into Images.

Without getting too philosophical on this subject, generative AI holds the potential to fundamentally change the fabric of society. Imagine when AI not only defends you in court, but also drafts (and passes) laws. AI is already being used by governments to decide who gets welfare and who doesn’t…and many times, it gets it wrong! Imagine your grandmother being denied medical coverage because an algorithm decided she was not worth saving. What other business models and social constructs will be upended? If everyone uses AI to create content, there will be unintended consequences, but the value this nascent technology will create cannot be overstated.

NeRFs (Neural Radiance Fields)

Let’s move to another subset of AI known as NeRFs. Not related to the foam missiles you fire at your younger brother, Neural Radiance Fields are a complex field of study that uses computer vision from a regular RGB camera to capture video and translate it into volumetric 3D renders you can import into 3D platforms and view spatially. NeRFs are not just a better way to turn scans of real-world places into 3D, orders of magnitude faster than current LiDAR solutions and at a fraction of the cost (a smartphone camera vs. a $50–100K scanner); the AI also takes the information and fills in the blanks to create a realistic and believable virtual version of physical space. These virtual models of the real world will help us populate spaces in the Metaverse quickly and easily, making everyone a creator.

These digital replicas of the real world will help us build shared spaces in the metaverse quickly and easily, extending real-life social networks and accelerating mainstream adoption.

For those like me who don’t understand the above diagram, there is a more simplified explanation here: “NeRF, better known as Neural Radiance Fields, is a state-of-the-art method that generates novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. The input can be provided as a Blender model or a static set of images.” Basically, wave your phone around and voila, you have a 3D volumetric capture (or at least that is the promise).
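That “continuous volumetric scene function” is used through volume rendering: march along a camera ray, and accumulate each sample’s colour weighted by its opacity and by how much light has survived the samples before it. Here is a toy single-channel sketch of that accumulation (illustrative only, the function name is invented, and a real NeRF learns the densities and colours with a neural network):

```python
import math

def render_ray(densities, colors, deltas):
    """Toy NeRF-style volume rendering along one ray: each sample
    contributes its color scaled by its alpha (opacity) and by the
    transmittance (light surviving all earlier samples)."""
    color = 0.0
    transmittance = 1.0  # all light survives before the first sample
    for sigma, c, delta in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this sample
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha  # less light survives past it
    return color
```

Training adjusts the densities and colours so that rays rendered this way match the input photos, which is how a sparse set of phone snapshots becomes a full 3D capture.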

NVIDIA getting started with NeRFs guide (for advanced programmers)

Note: There are no really easy ways to do this currently, but if you want to go deep, here is a video that explains how to make a NeRF in the easiest way I have found thus far (warning, it’s hard!).

You can also download a program called Polycam 3D for iPhone or Android and start 3D scanning objects and/or scenes for use in platforms such as MetaVRse or Unity.

Computer Vision & SLAM

Computer vision (CV) is the field of computer science that focuses on replicating the complexity of the human visual system and enabling computers to identify and process objects in images and videos in the same way that humans do. Imagine how autonomous cars see and how VR headsets understand what is around you.

Simultaneous Location and Mapping (SLAM) is a form of computer vision that allows your phone to map and understand your surroundings in order to display 3D content in your space. Built into your mobile device are several sensors (Accelerometer, Gyroscope, LiDAR scanner) that, in addition to what the RGB cameras see, provide context in terms of position in the X,Y,Z or 6-Degree of Freedom (6-DOF) space. This allows your phone to understand where the floor is and simultaneously project content into augmented reality.
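The 6-DOF idea above can be sketched in a few lines: a pose is a rotation plus a translation, and applying it maps a point from the device’s frame into the world frame SLAM is building. A simplified illustration using only yaw rotation (real SLAM estimates the full 3D rotation from sensor fusion; the function name here is made up for the example):

```python
import math

def apply_pose(point, yaw, translation):
    """Map a 3D point from the device frame to the world frame using
    a simplified pose: yaw rotation about the vertical axis plus an
    X,Y,Z translation (2 of the 6 degrees of freedom rotate; the
    other rotations are omitted for clarity)."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    # Rotate in the horizontal plane, then translate.
    xr = c * x - s * y
    yr = s * x + c * y
    tx, ty, tz = translation
    return (xr + tx, yr + ty, z + tz)
```

Anchoring AR content is this mapping run in reverse: once the phone knows its pose, it can place virtual objects at fixed world coordinates and keep them pinned as the device moves.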

As CV technology continues to advance, the possibilities will expand from autonomous vehicles, robots, and drones to augmented reality that looks as real as real.

Some of the capabilities of CV and SLAM include object recognition and tracking (think tracking a real-world object while projecting digital information on top of it).

Natural Language Processing & Conversational AI

Natural Language Processing (NLP) is a field of AI that focuses on the interaction between computers and human language. It involves the use of algorithms and statistical models to analyze, understand, and generate human language. NLP is used in a wide range of applications such as language translation, text-to-speech, sentiment analysis, and more.

Conversational AI is a subfield of NLP that focuses on creating human-like interactions between computers and humans using natural language. This can include chatbots, virtual assistants and voice assistants. The goal of conversational AI is to create a seamless and natural communication experience for users. This can be achieved through the use of advanced NLP techniques such as natural language understanding and generation, as well as machine learning and deep learning.
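As a toy illustration of the understanding step, here is a naive keyword-overlap intent matcher; production conversational AI replaces this with learned language models, and the names below are invented for the example:

```python
def classify_intent(utterance, intents):
    """Naive bag-of-words intent matcher: pick the intent whose
    keyword list overlaps the user's words the most; None if no
    keyword matches at all."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for intent, keywords in intents.items():
        score = len(words & set(keywords))
        if score > best_score:
            best, best_score = intent, score
    return best

# A hypothetical two-intent chatbot vocabulary.
INTENTS = {
    "greeting": ["hello", "hi", "hey"],
    "weather": ["weather", "rain", "sunny", "forecast"],
}
```

The gap between this sketch and a real assistant (handling paraphrases, context, and generation) is exactly what the advanced NLP, machine learning, and deep learning techniques mentioned above are for.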

Automatic Content Creation

Nothing says AI like automation. These tools allow you to say what it is you want to create and voila, it is there, in 3D! While there will be a ton of these tools in the Metaverse, this is the first one that we know of that works. This slideshow will give you a much deeper understanding of how this technology will revolutionize gaming. Even music is being created by AI now. Give it a try yourself at Anything World.

Check out this cool 3D object created completely by AI on LumaLabs.

To learn more about new cutting-edge technologies like GET3D from NVIDIA, Make-a-Video from Meta, and DreamFusion from Google, follow Two Minute Papers on YouTube.

As you can see, this is the future and while it is not quite ready for prime time, researchers are using AI to solve for AI so it won’t be long before this becomes how we build every virtual world in the Metaverse.

Generative AI Startup Landscape:

Well, there you have it, a pretty comprehensive look at the artificial intelligence algorithms that will directly impact and hopefully benefit you in the Metaverse.

Alan Smithson is co-founder of MetaVRse.

Header image credit: Midjourney


More from AR Insider…

from AR Insider https://alan-smithson.medium.com/practical-guide-to-ai-in-the-metaverse-583020bbe61f