What’s AR’s Role in Architecture & Engineering?



Though we spend ample time examining consumer-based AR endpoints, greater near-term impact is seen in the enterprise. This takes many forms including brands that use AR to promote products in greater dimension, and industrial enterprises that streamline operations.

These industrial endpoints include visual support in areas like assembly and maintenance. The idea is that AR’s line-of-sight orientation can guide front-line workers. Compared to the “mental mapping” they must do with 2D instructions, visual support makes them more effective.

This effectiveness results from AR-guided speed, accuracy, and safety. These micro efficiencies add up to worthwhile bottom-line impact when deployed at scale. Macro benefits include lessening job strain and closing the “skills gap,” which can preserve institutional knowledge.

But how is this materializing today and who’s realizing enterprise AR benefits? Our research arm ARtillery Intelligence tackled these questions in its report: Enterprise AR: Best Practices & Case Studies, Vol 2. We’ve excerpted it below, featuring Nox Innovations’ AR deployment.

Enterprise AR: Best Practices & Case Studies, Volume II

Positioning and Planning

Our enterprise AR case studies so far have included aerospace, healthcare, and warehousing. But another field that’s aligned with AR’s spatial positioning and planning capabilities is architecture and engineering. This is where Nox Innovations has deployed AR.

Specifically, Nox has applied AR to building information modeling (BIM). This involves large-scale construction projects that use computer modeling before the physical construction process. It lets architects and engineers anticipate issues virtually, which is far less expensive than discovering them on site.

Adding AR to this traditional workflow involves software from VisualLive and Microsoft HoloLens. This takes the benefits of BIM a step further by letting on-site engineers overlay models directly on their corresponding real-world positions. This gives them additional perspective.

Moreover, the benefits span engineering and construction phases. By visualizing BIM models in their real-world locations, AR can assist architectural work in earlier phases and construction in later phases. The alignment between the two can reduce deviation in BIM models.

For example, on-site foremen in construction phases can better envision and execute what architects and engineers planned. Put another way, AR can bridge the gap between engineering and construction. That gap is typically where expensive issues or human error can occur.

“People are shocked because you’re able to see the content that you created right there in real time, in the space that it’s going to be installed,” said Nox Innovations’ project engineer Alexandria Monnin.


Bottom-Line Impact

Much of the above was put into play in the construction of Microsoft data centers. Here, all the above stakes are elevated because of complex electrical, mechanical, and piping designs. Microsoft’s Azure Spatial Anchors were also used for positional accuracy of BIM models.

As background for those unfamiliar, Azure Spatial Anchors is Microsoft’s spatial computing product for persistently anchored digital content. In other words, digital content can be affixed to physical structures, where it stays in place and reappears across user sessions.

Using this software, engineers can lock down digital models to a specific physical point. That is then used as a persistent digital reference for subsequent construction and assembly work. The digital model stays in place, which is a key function in engineering and planning.

By doing all the above, Nox achieved a 21 percent increase in productivity and a 14 percent reduction in fabrication errors in the field. These metrics, again, represent solutions to traditionally expensive problems. And when applied at scale, they can have meaningful bottom-line impact.

“Construction rework is expensive,” said Monnin. “With the HoloLens and VisualLive, we’re able to avoid rework because you’re able to go out there on site, verify the model, and then during installation, we can catch up front that something isn’t going to work.”


from AR Insider https://arinsider.co/2022/09/27/whats-ars-role-in-architecture-engineering/

AR-Onboarding Walkthroughs in Mobile Apps


Augmented reality (AR) is a promising new technology with applications in domains ranging from ecommerce to education and gaming. However, AR also comes with relatively complex interactions, as it tries to realistically combine a virtual scene or artifact with the user’s physical space.

Our Research

To understand the user experience of mobile apps that use augmented reality, we ran a usability-testing study involving 11 participants; 7 sessions were held in person and 4 were remote. This study included a diverse group of applications from areas such as education, health, fitness, art, history, ecommerce, and entertainment. The goal of the study was to see if any progress had been made since our last AR study involving ecommerce apps and to investigate how AR is used in other domains beyond ecommerce.

AR Still Unfamiliar to Most Users

Many people still have no idea what augmented reality is: most of our participants did not know much about AR or were not at all familiar with the term. When asked “How would you describe the AR technology to someone who might not know anything about it?” eight participants confused AR with virtual reality. For example, one participant stated: “I have seen AR in demos at the store, where you put the head set on, but I don’t have anything like that in my possession. So, it [my experience with AR] has been pretty limited.” Three participants had used AR once before, and just one participant had used it twice.

AR Onboarding: Tips and Walkthroughs

While extensive help and tutorials are generally not needed for mobile apps, some onboarding can increase users’ awareness of new, unfamiliar technologies such as AR. AR presents users with novel interaction patterns, many of which are still lacking in both usability and functionality. Hence, many AR experiences will benefit from some form of education or onboarding.

There are two types of instructional content commonly used in mobile apps:

Contextually relevant tips are brief instructions or hints that are related to the user’s goal. They are generally the more effective method of providing help, usually because users are motivated to pay attention to them. AR-related tips can highlight key parts of the UI, tell people how to interact with the AR object, alert them of additional options, or give them feedback about how to correct certain actions (e.g., during calibration).

Bob’s Furniture: The user received timely contextual tips that highlighted certain functionalities (e.g., using the plus icon to add an object to the room — left or tapping to see more colors of the chair — right). However, some tips (such as Nice choice) were unnecessary and didn’t add any value to the experience but instead distracted the user.

Instructional walkthroughs are generally more complex and teach users about how to interact with the application. While their use in mobile apps is not generally recommended (since most mobile apps are fairly simple and don’t need explanations of the UI), instructional walkthroughs do have a place in AR-based applications due to the novelty of AR and AR-related patterns. Thus, AR walkthroughs can provide step-by-step onboarding and teach users the key components and functionality for completing the task by guiding them through a series of actions in an AR experience.

While both tips and walkthroughs can successfully assist novice users throughout an AR experience, this article focuses on onboarding walkthroughs.

AR-Related Walkthroughs

Walkthroughs are not an excuse for poor interface design. AR-related instructional walkthroughs should generally have two goals:

  • Increase awareness of the AR feature and show its potential.
  • Guide users through the novel interactions of a simple AR experience so they can learn how to use the AR feature.

If the primary functionality of the app is AR-related, the app may combine both these goals into a single walkthrough that is presented upon launch. If, however, AR is just a supporting feature for your app, don’t flood the user with AR instructions when the app is first launched. Often, users tend to dismiss lengthy or complex walkthroughs that are shown when they first use an app, especially if they consider them irrelevant for their immediate goals. Instead, show an interactive, instructional walkthrough when the AR feature is first used. Awareness of the AR technology can be created through a tip on those pages that include the AR feature.

Kinfolk app’s start screen introduced the purpose of the application and mentioned the AR feature (left). A deck-of-cards walkthrough provided detailed instructions about how to use the AR feature (right). A study participant was glad to receive this information. However, after launching the experience, she had to refer to the help menu multiple times to review the details of the available features and options.

In our study, the most used type of walkthrough was the deck of cards. While this onboarding style provided an overview of the AR features and set users’ expectations for the feature, it often caused cognitive overload and strained the user’s working memory. Participants couldn’t fully remember what they had read and had to refer to the help menu or relaunch the experience to review the deck of cards one more time.

Interactive walkthroughs were generally more effective than static walkthroughs, as they gave users a chance to practice a small-scale AR activity. Participants were generally appreciative when they could see detailed instructions about how to work with the AR feature and felt delighted when they were able to easily complete a first AR task within the app.

One participant using the JigSpace app was pleased with the detailed instructions that guided her through her first AR interaction within the app: “It’s easier than most apps, because it actually gives you  [instructions] at the bottom of the screen that actually tell you what you can do and what you can’t do. Tells you ‘just swipe it,’ ‘spin it’ — pretty straight forward.”

The JigSpace app provided an optional interactive walkthrough that showed the potential of the AR and helped the user practice the app’s key tasks and features.
Civilizations AR app: The optional interactive walkthrough (provided upon launch) took the participant through the experience of calibrating the AR feature (left) and interacting with a historical AR artifact (right). It introduced various key functionalities of the app and provided audio instructions and feedback in addition to written text. Despite the positive outcome of this experience, the instructions were hard to read due to the poor contrast with the background.

AR-Walkthrough Components

In addition to highlighting the core functionality of the UI, being aware of some of the main challenges that users face in their first interactions with AR features can help app designers set the stage for a successful AR experience. In particular, an AR walkthrough should be brief and touch upon the following key considerations for getting started with the AR feature:

  • What to expect from the AR experience
  • How to handle the device
  • How to prep the environment

What to Expect from the AR Experience?

When landing on the homepage of an app, many users had no idea how AR was used. Clearly sharing an overview of the AR experience and its goal will help users prepare for the next steps.

Focus first on the high-level purpose of the AR feature (e.g., see how a piece of furniture will look in your room, create interactive 3D presentations, drive a Mars Rover in your own environment) and, once that is clear, present users with low-level instructions such as how to use the specific controls in the interface.

The JigSpace app displayed an overview of the application’s functionality, followed by an optional interactive walkthrough that provided additional information about the key features of the app.

A participant trying a game in the ARLOOPA app could not understand the goal of the game and what she should expect from the AR experience. She stated: “I don’t know how to play it. I’m just going to try a different game. I did not understand that one. […] So, I push Help because I don’t know how to do it.”

Unfortunately, using the Help menu did not resolve the issue, as it provided low-granularity information about interacting with the AR object and did not mention the goal of the game. The participant said: “I don’t understand how to play this game at all. This is where I get frustrated.” She added: “It would have been helpful to get a better description of ideas or directions, I guess. This seems really weird. Even though I don’t understand how to play the game, I am inside the game because it’s so zoomed in, but it doesn’t help because you don’t even know what you’re doing. So, this needs to be described better.”

A participant playing a game in the ARLOOPA app could not understand the goal of the game (left); when she accessed the help page (right), the information provided had too low granularity and did not help her.

The Mission to Mars app showed an initial walkthrough with some basic instructions about interacting with the AR object. However, the objective of the AR experience was unclear. Hence, the participant later struggled to understand the hints and tips as she was using the app. Clarifying the purpose of the AR experience ahead of time could have helped her see the bigger picture and resolved potential confusion. She said:  “I don’t know what ‘follow the arrow to the nearest circle’ means. It seems like, it’s supposed to be a game. Cause it has a green thing that says like health, which I associate with game and staying alive. […] Oh, what’s that? Oh, I can drive it. […] So, there’s really no direction as to what I’m doing. I didn’t know I could drive it, but I sort of figured it out with this white circle thing. I’m backing it up. I don’t know. It seems sort of awkward, but actually it’s cool. […] I didn’t know what the Mars Rover looked like. Oh, I guess I was supposed to navigate this guy to this green area.”

The walkthrough in the Mission to Mars app included basic instructions about interacting with the AR object, but did not clearly state the purpose of the app. (If you carefully read the text in the left screenshot, you will discover it, but most users won’t patiently read lengthy descriptions.) Moreover, game-specific signifiers (like the Health meter in the right screenshot) might not be familiar to users with little gaming experience.

How to Hold the Mobile Device

Due to the participants’ unfamiliarity with the AR technology, they were often unaware that the device’s camera would be used to take advantage of the AR feature. For instance, upon launching BBC’s Civilizations AR app, a participant was confused when she received instructions in text format but could not see any other visuals on the screen. Initially, she assumed that the AR feature had failed; it took her a couple of minutes to realize that the problem occurred because she had placed the phone on the table and the desk was blocking the camera view. She said: “Well, ‘cause I had my phone on the table, the camera was on already. So, it was already going. I didn’t really realize that. So, I thought it was just black and I was waiting for the tutorial to tell me what to do. So now I want to go back in the tutorial and find out what it was telling me. I don’t want to skip the tutorial.”

One participant browsing BBC’s Civilizations AR app had her phone on the table during the walkthrough and did not realize that the AR experience relied on her phone’s camera. After accidentally picking up her phone, she understood what the problem was.

Let users know how they should hold their phone: whether they should keep it in their hands, at a specific height, angle, or orientation (portrait or landscape). Inform users that the camera should not be covered and should be directed towards an empty area. This information will prevent mistakes such as placing the phone on the table or leaning it against the wall.

The Home Court basketball-training app didn’t tell the user that it was supposed to be used in landscape mode. Hence, our participant had to relaunch the experience after realizing that landscape orientation was the preferred option for the task.

Furthermore, the handling of the device should take into account physical limitations and should not conflict with accomplishing the primary task. For instance, participants struggled with drawing in the SketchAR app. The app “projected” an image onto a piece of paper and the user could trace the lines in that image to make their own drawing. To draw the selected image, the user had to hold the phone very still with one hand above the paper and draw on the paper with the other hand, while looking at the screen so they could replicate the image on the paper. Moving the phone could cause the user to lose their work.

A participant who attempted this task commented: “I have to hold the phone […] in place without moving. So, in this case it seems pretty difficult to keep it up, [it] keeps losing the paper. […]. [It’s frustrating] that it keep losing the paper, but, more than that, it is that I’m holding my phone with my left hand, looking at it without trying to move it. Because I feel like if I move it, then it’s going to lose track of the paper. […] I’ve lost it. […] Is it because the table is white as well and it’s losing it or is it because I’m just moving around so much?”

The SketchAR app superimposed an image of a dog on a physical piece of paper (left). The user was supposed to trace the dog on the paper while keeping the phone’s camera still in the other hand. Our study participant kept losing the image and their progress (right).

How to Prep the Environment

Preparing the physical environment can be another hurdle that users need to overcome as they interact with an AR feature. Users should receive timely and contextually relevant step-by-step instructions regarding requirements such as lighting, surface quality, and space size. For example, some applications will perform better when viewed outdoors or in a particular location.

One participant who was using the Smartify: Museum and Art Guide app received no clear instructions regarding the preferred size of the space where she should place an AR artifact; hence she had difficulty fitting and adjusting it in her room. She said, “So, it should have told me, you know, ‘the object is going to be three-by-three-feet size;’ then, I would have scanned or chosen the location accordingly so that there wouldn’t have been this confusion of, you know, ‘more light, move your phone, and all that.’ […]  If they would tell me ‘this particular art item would need this much space. So, scan accordingly’ — that would be helpful.”

In the Smartify: Museum and Art Guide app, the participant had to scan a space to position an AR artifact in it, but the app didn’t specify the required size of the area. The participant struggled to find the right spot for launching the AR experience.

The Augmented Berlin app did give some instructions regarding the requirements of the AR experience, but the instructions were not detailed enough. A study participant said, “It was good that it gave me a tip like that it would require more space and lighting, but, you know, it didn’t tell me how much. More — could be any amount. So having the exact space that it would require, like saying it would require a 10-by-10-feet wall, […] would have been helpful for me to choose the right space, because now I can hear the audio and the audio says ‘here, it happened something, something happened here’ [but I don’t see anything].”


Additionally, the instructions should not be overwhelming or too complicated. The Best Buy app asked the user to attach four stickers to the environment to calibrate the AR feature. One study participant felt it was too much and skipped this part: “It’s getting difficult for me to, you know, project it because it needs particular things, but I think I [will] just try to place and I tap it. […] It says your TV was placed but I can’t see the TV. Yeah, […] it’s a little difficult, it’s not very […] user friendly. It’s asking so many specifications to be, like, perfect.”

The Best Buy app asked the user to go through additional steps to complete the calibration process. However, the participant considered the particular requirements overwhelming.

Many users won’t bother to go through a complicated process to view an item in the room, especially if their primary goal is not interaction with the AR object. In such cases, the AR feature is useful only if it adds extra value with minimum effort.

Another critical aspect of the AR experience is users’ safety. Many AR interactions involve moving around the physical environment while looking at the phone screen. Hence, users might disregard their environment or miss potential hazards such as slippery surfaces, sharp objects, or obstacles. Tell users to find a safe place away from all these potential risks before beginning the AR experience.

The Mission to Mars application provided a brief, optional instructional onboarding, including examples of environmental requirements, such as appropriate surface (top right), lighting requirements (bottom left), and camera handling (bottom right). These instructions, combined with visual examples, were helpful in guiding users throughout the process. Additionally, the tutorial warned the users about potential safety hazards and ensured the users’ safety before starting the experience (top left). 

Conclusion

AR is an exciting new technology that has plenty of potential. Many of our study participants were delighted when they were able to go through an experience successfully. For example, after using the MauAR — Berlin Wall app, one participant said, “I think it’s cool! I will probably hang on to this [app] after this [session]! I would probably like [to] run through it and finish it out. I’ve never seen anything like it! […] It’s like an education application that is a little more immersive…”

However, AR-related patterns (as well as the meaning of AR) are unfamiliar to most users. To help people take advantage of AR in your apps, consider developing contextual tips and instructional walkthroughs that set the stage and help users prepare for the interaction.

Make sure that your walkthroughs describe the purpose of your app and how the AR functionality fits in. Then clearly explain to users how they are supposed to handle their device throughout their experience and how their space should be chosen for an optimal, safe interaction.

from NN/g latest articles and announcements https://www.nngroup.com/articles/ar-walkthroughs/

How to Authenticate a User with Face Recognition in React.js


With the advent of Web 2.0, authenticating users became a crucial task for developers.

Before Web 2.0, website visitors could only view the content of a web page – there was no interaction. This era of the internet was called Web 1.0.

But after Web 2.0, people gained the ability to post their own content on a website. And then content moderation became a never-ending task for website owners.

To reduce spam on these websites, developers introduced user authentication systems. Now website moderators can easily know the source of spam and can prevent those spammers from accessing the website further.

If you want to know how to implement content moderation on your website, you can read my article on How to detect and blur faces in your web applications.

Now let’s see what we’ll be getting into in this tutorial.

What You’ll Learn in This Tutorial

In this tutorial, we will discuss different authentication techniques you can use to authenticate users. These include email-password authentication, phone authentication, OAuth, passwordless magic links, and finally facial authentication.

Our primary focus will be on authentication via face recognition techniques in this article.

We’ll also build a project that teaches you how to integrate facial recognition-based authentication in your React web application.

In this project, we’ll use the FaceIO SaaS (software as a service) platform to integrate facial recognition-based authentication. So, make sure you set up a free FaceIO account to follow along.

And finally, we’ll take a look at the user privacy aspect and discuss how face recognition doesn’t harm your privacy. We’ll also talk about whether it’s a reliable choice for developers in the future.

This article is packed with information, hands-on projects, and discussions. Grab a cup of coffee and a slice of pizza 🍕 and let’s get started.

The final version of this project is a simple page whose Sign-in and Log-in buttons trigger the FaceIO widget. Looks interesting? Let’s do it then.

[Demo: the finished FaceIO authentication flow]

Different Types of User Authentication Systems

There are many user authentication systems out there right now that you can choose to implement in your websites. No auth technique is inherently superior or inferior; it comes down to using the right tool for the job.

For example, if you are making a simple landing page to collect emails from users, there is no need to use OAuth. But if you are building a social platform, then using OAuth makes more sense than traditional authentication, because you can pull the user’s details and profile images directly from the OAuth provider.

If your web application is built around any investment-related content or legally binding services, then using phone auth makes more sense. A user can create unlimited email accounts but they’ll have limited phone numbers to use.

Let’s take a look at some popular authentication systems so we can see their pros and cons.

Email-password based authentication

Email-password-based authentication is the oldest technique for verifying a user. The implementation is also very simple and easy to use.

The pro of this system is you don’t need to have a third-party account to log in. If you have an email, whether it is self-hosted or from a service (like Gmail, Outlook, and so on), you are good to go.

The primary con of this system is you need to remember all of your passwords. As the number of websites is constantly growing and we need to log in to most sites to access our profiles, remembering passwords for every site becomes a daunting task for us humans.

Coming up with a unique and strong password is also a huge task. Our brains aren’t typically capable of memorizing many random strings of letters and numbers. This is the biggest drawback of email-password-based authentication systems.

Phone authentication

Phone authentication is generally a very reliable auth technique to verify a user’s identity. As a user typically doesn’t have more than one phone number, this can be best suited for assets-related websites where user identity is very important.

But the drawback of this system is people don’t want to reveal their phone numbers if they don’t trust you. A phone number is much more personal than an email.

One more important factor of phone authentication is its cost. The cost of sending a text message to a user with an OTP is high compared to email. So website owners and developers often prefer to stick with email auth.

OAuth-based authentication

OAuth is a relatively new technique compared to the previous two. In this technique, an OAuth provider handles authentication and shares useful profile information on behalf of the user.

For example, if the user has an account with Google, they can log in to other sites directly using their Google account. The website gets the user details from Google itself. This means that there’s no need to create multiple accounts and remember every password for those accounts.

The major drawback of this system is that you as a developer have to trust the OAuth providers and many people don’t want to link all their accounts for privacy reasons. So you’ll often see an email-password field in addition to OAuth on most websites.

Magic-link authentication

Magic links solve most of the problems you face in email-password-based authentication. Here you have to provide only your email address, and you will receive an email with an auth link. Then you open this link in your browser and you are done. No need to remember any passwords.

This type of authentication has gained popularity these days. It saves a lot of time for the user, and it’s also very cheap. And you don’t have to trust a third party as in the case of OAuth.

Facial recognition authentication

Facial recognition is one of the latest authentication techniques, and many developers are adopting it these days. Facial recognition reduces the hassle of entering your email-password or any other user credentials to log in to a web application.

The most important thing is that this authentication system is fast and doesn’t need any special hardware. You just need a webcam, which almost all devices have nowadays.

Facial recognition technology uses artificial intelligence to map out the unique facial details of a user and store them as a hash (some random numbers and text with no meaning) to reduce privacy-related issues.

Building and deploying an artificial intelligence-based face recognition model from scratch is not easy and can be very costly for indie developers and small startups. So you can use SaaS platforms to do all this heavy lifting for you. FaceIO and AWS Rekognition are examples of the types of services you can use in your projects.

In this hands-on project, we are going to use FaceIO APIs to authenticate a user via facial recognition in a React web application. FaceIO gives you an easy way to integrate the authentication system with their fio.js JavaScript library.

Project Setup

Before starting, make sure to create a FaceIO account and create a new project. Save the public ID of your FaceIO project. We need this ID later in our project.

To make a React.js project, we will use Vite. To start a Vite project, navigate to your desired folder and execute the following command:

npm create vite@latest

Then follow the instructions and create a React app using Vite. Navigate inside the folder and run npm install to install all the dependencies for your project.

[Screenshot: Vite’s interactive project-setup prompts]

After following all these steps, your project structure should look like this:

.
├── index.html
├── package.json
├── package-lock.json
├── public
│   └── vite.svg
├── src
│   ├── App.css
│   ├── App.jsx
│   ├── assets
│   │   └── react.svg
│   └── main.jsx
└── vite.config.js

How to Integrate FaceIO into Our React Project

To integrate FaceIO into our project, we need to add their CDN script to the index.html file. Open the index.html file and add the FaceIO CDN before the root component. To learn more, check out FaceIO’s integration guide.

<body>    
    <script src="https://cdn.faceio.net/fio.js"></script>
    <div id="root"></div>
    <script type="module" src="/src/main.jsx"></script>
</body>

Now remove all the code from the App.jsx file to start from scratch. I’ve kept this tutorial as minimal as possible, so I’ve only added a heading and two buttons to the UI to demonstrate how the FaceIO facial authentication process works.

Here, one button works as a sign-in button, and the other one works as a log-in button.

The code inside the App.jsx file looks like this:

import "./App.css";
function App() {
  return (
    <section>
      <h1>Face Authentication by FaceIO</h1>
      <button>Sign-in</button>
      <button>Log-in</button>
    </section>
  );
}

export default App;

How to Register a User’s Face using FaceIO

Working with FaceIO is very fast and easy. As we are using the fio.js library, we have to execute only one helper function to authenticate a user. This fio.js library will do most of the work for us.

To register a user, we initialize our FaceIO object inside a useEffect hook. Otherwise, every time a state changes, it re-runs the components and reinitializes the faceIO object.

import { useEffect } from "react";

let faceio;
useEffect(() => {
  // Initialize once on mount so re-renders don't re-create the faceIO object
  faceio = new faceIO("Your Public ID goes here");
}, []);

Your FaceIO public ID is located on your FaceIO console. Copy the public ID and paste it here to initialize your FaceIO object.

Now, define a function named handleSignIn(). This function contains our user registration logic.

Inside the function, call the enroll method of the faceio object. This enroll method is equivalent to the sign-up function in a standard password-based registration system and accepts a payload argument. You can add any user-specific information (for example, their name or email address) to this payload.

This payload information will be stored along with the facial authentication data for future reference. To learn about other optional arguments, check out their API docs.

In our sign-in button, on user click we invoke this handleSignIn() function. The code snippets for user sign-in look like this:

const handleSignIn = async () => {
    try {
      let response = await faceio.enroll({
        locale: "auto", // widget language: auto-detect
        payload: {
          // any user data you want stored alongside the facial ID
          email: "example@gmail.com",
          pin: "12345",
        },
      });

      console.log(` Unique Facial ID: ${response.facialId}
      Enrollment Date: ${response.timestamp}
      Gender: ${response.details.gender}
      Age Approximation: ${response.details.age}`);
    } catch (error) {
      console.log(error);
    }
  };

<button onClick={handleSignIn}>Sign-in</button>
[Screenshot: the FaceIO enrollment screen]

How to Sign In using Face Recognition

After registering the user, you’ll need a log-in flow. The fio.js library also makes it very easy to set up log-in using face authentication.

We have to invoke the authenticate method of the faceIO object, which is equivalent to the sign-in function in a standard password-based registration system. All the critical work is done by the fio.js package.

First, define a new function named handleLogIn() to handle all the log-in logic in our React app. Inside this function, we invoke the authenticate method of the faceIO object, as mentioned earlier.

This method accepts a locale argument: the default language in which the FaceIO widget interacts with users. If you are not sure, you can assign auto in this field.

The authenticate method also takes other optional arguments, such as permissionTimeout, idleTimeout, and replyTimeout. You can check out their API documentation to learn more about the optional arguments.

We invoke this handleLogIn() function when someone clicks on the Log-in button:

const handleLogIn = async () => {
    try {
      let response = await faceio.authenticate({
        locale: "auto", // widget language; optional arguments like
        // permissionTimeout and idleTimeout can also be passed here
      });

      console.log(` Unique Facial ID: ${response.facialId}
          PayLoad: ${response.payload}
          `);
    } catch (error) {
      console.log(error);
    }
  };

<button onClick={handleLogIn}>Log-in</button>

Our user authentication project using FaceIO and React is now complete! You learned how to register and log in a user. You can see the process is fairly simple compared to implementing email-password-based or other authentication methods we discussed earlier in this article.

Now you can style all the jsx elements using CSS. I didn’t add CSS here to reduce complexity in this project. If you are curious, you can take a look at my GitHub gist.

If you want to host this React FaceIO project for free, you can check out this article on how to deploy your React and Next.js projects on Cloudflare Pages.

How to Use the FaceIO REST API

Besides providing widgets via the fio.js library, FaceIO also provides REST APIs to streamline the authentication process.

Every application in the FaceIO console has an API key. You can use this API key to access the FaceIO REST API endpoints. The base URL for the REST API is https://api.faceio.net/.

The URL schema accepts URL parameters like this: https://api.faceio.net/cmd?param=val&param2=val2. Here, cmd is an API endpoint and param is an endpoint parameter, if the endpoint accepts any.

Using the REST API endpoints, you can automate various tasks in your backend.

  1. You can delete a face ID on a user’s request.
  2. You can attach a payload with a face ID.
  3. You can change the PIN associated with a face ID.

This REST API is intended to be used purely on the server side. Make sure you don’t expose it to clients. It’s important that you read the following Privacy and Security sections to learn more about this.
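To make that concrete, here is a minimal server-side sketch in Node.js (18+, which ships with fetch). It follows the cmd?param=val schema above; the deletefacialid endpoint and its fid and key parameter names are my reading of FaceIO’s REST API docs, so verify them there before relying on this.

// Server-side only: never expose your FaceIO API key to clients.
// Assumed endpoint and parameter names (deletefacialid, fid, key) --
// double-check them against FaceIO's REST API documentation.
const FACEIO_API_KEY = process.env.FACEIO_API_KEY;

async function deleteFacialId(facialId) {
  const url = new URL("https://api.faceio.net/deletefacialid");
  url.searchParams.set("fid", facialId); // the facial ID to remove
  url.searchParams.set("key", FACEIO_API_KEY);

  const response = await fetch(url); // built-in fetch in Node 18+
  return response.json(); // FaceIO is expected to reply with a JSON status object
}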

How to Use FaceIO Webhooks

Webhooks are event-driven communication systems among servers. You can use this webhook feature of FaceIO to update and sync your backend with new events happening in your front-end web application.

Webhook events fire on new user enrollment, facial authentication success, facial ID deletion, and so on.

You can set up FaceIO webhooks in your project console. A webhook call from FaceIO times out after 6 seconds. It contains all the information about a specific event in JSON format and looks like this:

{
  "eventName":"String - Event Name",
  "facialId": "String - Unique Facial ID of the Target User",
  "appId":    "String - Application Public ID",
  "clientIp": "String - Public IP Address",
  "details": {
     "timestamp": "Optional String - Event Timestamp",
     "gender":    "Optional String - Gender of the Enrolled User",
     "age":       "Optional String - Age of the Enrolled User"
   }
}
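To sync your backend with these events, you need an HTTP endpoint that receives this JSON. Below is a minimal sketch using Express; the /faceio-webhook route name is my own choice (you configure the URL in your FaceIO console), and the field names come from the JSON above. The exact event names are listed in FaceIO’s webhook docs.

import express from "express";

const app = express();
app.use(express.json()); // FaceIO posts a JSON body

app.post("/faceio-webhook", (req, res) => {
  const { eventName, facialId, appId } = req.body;

  // React to the event here, e.g. create or delete a user record.
  console.log(`FaceIO event ${eventName} for app ${appId}: ${facialId}`);

  res.sendStatus(200); // respond quickly; webhook calls time out after 6 seconds
});

app.listen(3000);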

Privacy and FaceIO

Privacy is one of the most important things for users nowadays. As big corporations use personal data for their own ends, questions arise about whether these face recognition techniques respect user privacy.

FaceIO as a service follows all the privacy guidelines and gets user consent before requesting camera access. Even if the developer wanted it to, FaceIO doesn’t scan faces without consent. Users can easily opt out of the system and delete their facial data from the server.

FaceIO is CCPA and GDPR compliant. As a developer, you can release this facial authentication system anywhere in the world without facing privacy issues. You can read this article to learn more about FaceIO privacy best practices.

FaceIO Security

The security of a web application is an important topic to discuss and consider. As a developer, you are responsible for the security of a site or application’s users.

FaceIO follows some important and serious security guidelines for user data protection. FaceIO hashes all the user’s unique facial data along with the payload we specified earlier, so the stored information is nothing but random strings that can’t be reverse-engineered.

FaceIO outlines some very important security guidelines for developers. Their security guide focuses on adding a strong PIN code to protect user data. FaceIO also rejects covered faces so that no one can impersonate someone else.

Conclusion

If you’ve read this far, thank you for your time and effort. Make sure to follow along with the hands-on tutorial so you can fully grasp the topic.

The project should be approachable if you follow all the steps. If you make something out of it, show me on Twitter. If you have any questions, please ask. I will be happy to help you. Till then, have a good day.

from freeCodeCamp https://www.freecodecamp.org/news/authenticate-with-face-recognition-reactjs/

The Principles and Laws of UX Design

Five prominent rings.

Blue, yellow, black, green, and red.

It’s one of the most recognizable symbols globally – a hallmark of good design. Yet, designing an Olympic logo isn’t a walk in the park.

Striking a delicate balance between the host city and the revelry of the games is a tough act – although not unachievable. The logo of the 1964 Tokyo Games, designed by Yusaku Kamekura and Masaru Katsumi, is a stellar example of timeless design.

Why did the logo work?

Among other reasons, it embodied two crucial commandments in Dieter Rams’s principles of design (we’ll come back to these): (a) Good design is long-lasting, and (b) Good design is as little design as possible.

Who doesn’t know the “land of the rising sun”?

As much as it captured the very essence of Tokyo, it also celebrated the spirit of sport.

Tokyo Olympic 1964


Across the world, the Allianz Arena in München, Germany, can accommodate 75,000 spectators. But, that’s not the only thing that’s impressive about it. Host to the opening ceremonies of the 2006 World Cup, it’s considered one of the best architectural structures. The stadium’s design emphasizes the procession-like walk of the fans toward the stadium.

Although the stadium is shaped like a crater, the stairs outside lead up a long slope to the approach, so that from an aerial view the fans look like a swarm of ants making their way home. Thousands of fans walk shoulder to shoulder, the adrenaline rush is high, and there’s solidarity in the air. The exterior of the stadium also changes color. Each aspect of the stadium is a masterclass in innovative design.

This is to say that all designs must serve a purpose.

But, before we get there, let’s go back to the roots of designing to what UX design has become now. The objective has always been the same – to create a user-friendly experience.

It is the base of all design, whether in art, architecture, or digital spaces.

A Brief History of Design

According to an article published by Career Foundry, we can travel back to 6000 B.C. to start our journey in design. With the concept of Feng Shui implemented in living spaces, the idea was to move objects around to make life harmonious and optimal. Choosing the right colors too is an intrinsic part of Feng Shui as it affects a person’s mood.

Not too different from designing any user interface, is it?

By 500 B.C., alphabets had taken concrete shape – a milestone in design and a breakthrough in communication. Modern-day design, and the efficiency and purpose of design as we see it now, perhaps started with Toyota. It put people and workers at the forefront, encouraging a healthy lifestyle and decent pay, and actively incorporating suggestions and feedback.

They placed their employees at the heart – a critical step in defining user experience.

Had UX design finally seen the light of day? Perhaps, it did.

Cut to – the 70s – to Apple.

Xerox’s PARC research center deserves a special mention here though. The mouse and the graphical interface were boons that the center bestowed on the world and set the path for future personal computing that we’ve come to accept as necessities today.

Before the world relied on Siri or got used to the “Marimba” ringtone, Apple released the Macintosh, its first PC with a graphical user interface, a built-in screen, and a mouse. Then, in 2001, teenagers found the only way to stay “cool” by playing around with the iPod click-wheel, till they landed on The Calling’s “Wherever You Will Go.”

It was a time of great UI, even better UX, and incredible music.

In 1995, Donald Norman, a cognitive scientist at Apple, coined the term “user experience.” At Apple, he worked as a User Experience Architect, the first there ever was.

In 2022, the term has evolved into so much more than just what looks good.

It’s a shape-shifting phenomenon that looks different every day. The focus now is on personalized and localized user experience with a heavy dose of augmented reality, artificial intelligence, data visualization, 3D elements, and responsive designs.

Now, let’s get to the meat.

Principles of UI/UX Design

The Pareto Principle


Ever heard of the 80/20 rule? Eat 80% of the pie and leave the rest for the spouse? No, unfortunately, not that one.

The principle states that 80% of the effects of any process result from 20% of the causes. In UX design, you might view it slightly differently: roughly 80% of your users will use only 20% of your features.

Bottom line – simplify interfaces. Get rid of the frills. Remove buttons or features that don’t contribute to the outcome.

The Gestalt Principle

The Gestalt Principle, or Gestalt psychology, refers to laws of human perception that describe how we group similar elements, recognize patterns, and simplify complex images when we perceive objects.

For designers, it’s crucial to understand and implement this principle to organize the layout of any interface and make it aesthetically pleasing.

Six common laws fall under the Gestalt Principle:

  •  Closure (Reification)


The human mind is wired to complete perceived incomplete shapes. Hence, we automatically fill in gaps between elements so that the mind can accept them as one whole entity.

Designers rely heavily on this law to create logos with negative space or to make negative space look less barren.

  • Common region

Law of common region

The human mind also groups elements that fall within the same closed region. To put this law to use, designers deliberately place related objects in the same closed area to distinguish them from other groups.

An excellent way to create a common region is by placing elements inside a border.

  • Continuation

Law of continuation

Whether with lines, curves, or a sequence of shapes, our eyes tend to follow a natural path. A break in these elements might be jarring – a key learning for a designer. It may immediately drive a user away. Continuation also affects positive and negative spaces in design.

The objective is to create a flow that is easy to navigate.

When designing an e-commerce website, ensure that navigation follows a linear path. In the example given below, one can quickly categorize and differentiate between primary and secondary navigation. Home, profile, shop, contact, and help promptly stand out as one group, while men, women, and kids form another.

  • Figure/Ground (Multi-stability)


What do you see first? A vase or two faces?

What’s happening here is called the principle of multi-stability. The image can be interpreted in two ways, but our mind can only hold one interpretation at a time. Because it’s out of our conscious control, we can’t predict who will see the vase first and who will see the two faces.

When posed with a dilemma like this one, our mind is quick to fight uncertainty and look for solid and stable items. But, in most cases, unless an image is truly abstract, the foreground catches our eye first.

In UX design, this principle is used in navigation panels, modals, and dialogs.

  • Proximity (Emergence)

Law of Proximity

It’s the spatial relationship between elements inside a perceived or actual frame. To follow this rule, place related things close to each other and unrelated things farther apart.

Peace Innovation

You can also apply the same rule in the context of text. Sentences should be grouped in paragraphs and separated below and above by whitespace. Whitespaces around headings demarcate the beginning of a new topic or paragraph.

Clingy Cat Solution

  • Similarity (Invariance)


The invariance principle states that our brain finds similarities and differences in everything. This is why it’s easy to make something the center of attention in a crowd of similar objects. Imagine a wall full of black squares in different sizes and one solitary red square. Without realizing it, you created two groups in your head.

The fields and the button are the same size in the image below. However, the button is a different color, which immediately prompts us to perform a specific action. We intuitively know that the blue text in the description indicates links.

Log In Panel

Understanding design principles provides designers with a good head start on their journey. But, there are 10 commandments of design by Dieter Rams that a designer must follow:

Dieter Rams’s 10 Commandments for Good Design

Good design is innovative

Developments in technology go hand-in-hand with those of UI and UX design – they supplement each other. As a result, there is always room for innovative design with new offerings in technology, especially when designing for the masses. However, innovative design doesn’t have to rely on technology alone. It can also benefit from shifting trends in user behavior.

Good design makes a product useful

A design’s sole purpose is to serve a practical end. When a design meets functional, psychological, and aesthetic criteria, it emphasizes the usefulness of a product.

Good design is aesthetic 

Human beings are visual creatures and have relied on visual cues since the beginning of time to find food, shelter, mates, and the like. So, when designing a product, the aesthetic quality is integral to its usefulness and success.

Good design makes a product understandable

If you must explain a product and what it does, consider the battle lost. Good design makes the product’s structure clear in the product itself. It should be self-explanatory and intuitive.

Good design is unobtrusive

In UX design, products rarely take up physical space. Yet good UX design seamlessly finds its way into our daily life. The design should be neutral yet feel personalized.

Good design is honest

If your design attempts to manipulate the consumer – you should go back to the drawing board and start afresh. Good UI design has nothing to hide; it’s transparent.

Good design is long-lasting

Good design doesn’t attempt to be fashionable; it stays classic and never appears antiquated. Instead, it stands out as fresh even in a constantly changing world.

Good design is thorough down to the last detail

When designing a product, designers must put themselves in the user’s shoes. Starting a project by forcing a solution is not the way to go. Instead, focus on all the pain points and leave nothing out. Practice care and accuracy at every step of the design process.

Good design is environmentally-friendly

What can you do as a UI designer to make your designs more earth-friendly? For starters, you can choose an eco-friendly web host, power your website with a green energy source, and create simple designs. All of which will help reduce the carbon emissions of your website.

Good design is as little design as possible

Always strip down to the basics and keep what is crucial. The more the clutter, the more confused the user will be. Focus on reducing elements and buttons as it will help you concentrate on essential aspects and things that matter.

Getting the hang of it? There’s just one last thing we’ll cover now. Pay attention – it’s important.

UX Laws Every Designer Should Know About

Hick’s law


Hick’s law states that the more choices users have, the longer they take to decide, as they are burdened with the complexity of the options. To incorporate Hick’s law into your design, break complex tasks into smaller steps and minimize choices when response times are critical.

Sometimes, the user needs a little help. Highlight options as recommendations to help ease their user journey. However, be careful of what you’re subtracting or removing – you may miss out on crucial elements.

Fitts’s law


Fitts’s law simplifies the process for users even more. Think of it this way: the user wants to hit a bull’s eye in one shot, except the center of the target shouldn’t be a small red dot. It should be as large as possible.

Touch targets should be large enough so that users can accurately select them. Ensure that there is enough space between the touch target and other buttons so that movements are quick, deliberate, and precise.
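In CSS terms, this often boils down to minimum target sizes and spacing. Here is a small illustrative sketch; the 44px value reflects common touch-target guidelines, and the class names are hypothetical:

/* Comfortable touch targets per Fitts's law */
.touch-button {
  min-width: 44px;   /* large enough to hit accurately */
  min-height: 44px;
  padding: 12px 20px;
}

.touch-button + .touch-button {
  margin-left: 12px; /* spacing prevents accidental taps on neighbors */
}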

Miller’s law


Miller’s law states that the average person can only retain about seven items in their working memory. Suppose you are designing a navigation page: bombard the user with more than seven elements, and chances are they won’t recall where they came from.

This is often why services or products with several options are grouped to reduce the memory load.

Jakob’s Law


Jakob’s law states that users will often project expectations from other sites onto yours. If they prefer a website for any reason, they will enjoy spending time on it. When they hop onto your site, they will expect a similar sense of aesthetics and satisfaction to what their preferred sites offer.

While it may seem counter-intuitive, it may be a good idea to stay close to established conventions rather than trying to create something overtly unique.

Even when armed with all the knowledge in the world, mistakes are bound to happen. When designing for UX, designers often make the following mistakes. With everything we’ve learned, let’s figure out how we can avoid them.

UX Design Mistakes to Avoid

Inconsistencies

Inconsistency is a major turn-off, whether in life or in UX design. For instance, while using straight lines as dividers for icons, elements, or segments, ensure that the lines are consistently thick or thin. If you’ve settled on a font, incorporate fonts of the same family throughout the product. When each element within your design creates what appears to be a consistent pattern, inconsistency breaks the pattern. The anomaly stands out in a jarring way.

Blurred lines between primary and secondary buttons

Not demarcating primary and secondary buttons is a good way to annoy a user – the biggest sin a designer can commit. Primary and secondary buttons exist as they serve a specific purpose. Highlight primary buttons in a strong color and add more visual weight to them.

Lack of text hierarchy

A lack of text hierarchy can also instantly break your design. Think of study notes you made in school while cramming for an exam. You capitalized the main topic, wrote over it to make it appear bold, and even drowned it in fluorescent yellow highlighter. The important bits followed as sub-headings and then bullet points. A clear ranking from the most critical information to the least stood out most effectively. Apply similar practices in UX design and ensure that you let your text breathe with adequate spacing.

Not focusing on icons

Bad iconography can make a potentially successful design or product one that will instantly be forgotten. Why are icons important in UX design? Users recognize them instantly, and it helps them navigate better. Most importantly – icons save space. The purpose of an icon is to communicate a concept quickly. Hence, it’s best to stick to figures and images that resonate with the action it prompts. Line style, hand-drawn, and multi-color icons are all the rage in 2022.

Low-quality images

We’re in 2022 – visuals are everything. There is no excuse for you to settle for low-quality images. Your user most definitely won’t. While you’re at it, look for images that speak about your service or product and find high-quality images only. Staged and fake photos may land you in a hot mess, so look for realistic and creative photos.

Now, to stay abreast – let’s equip you with some UX trends for 2022.

2022 UX Trends to Keep an Eye On

  1. Simplicity wins. If there’s one thing you must learn from Dieter Rams, it’s simplicity. Whether we’re in 2022 or 3022, one thing will remain constant: simplicity never goes out of fashion. So, when designing a product, your sole aim shouldn’t be to chase everything transforming around you. Start with the basics and come back to the basics.
  2. Delicate serifs will continue to reign, but now is an excellent time to experiment with typography. Go bold and go big. Keep in mind that it may appear boxy. Likewise, 70s-inspired disco fonts are making quite the comeback.
  3. Characterized by blurred backgrounds, Glassmorphism creates a frosted glass effect. To create this effect, place light or dark elements on colorful multi-layer backgrounds. As you add another layer of a blurry effect to the background of the elements, it appears as though it’s morphed into frosted glass.
  4. Were you aware that 22% of internet users buy groceries using voice assistants? If you didn’t, now is a good time to incorporate voice user interfaces in your design or product.
  5. Diversity and inclusivity shouldn’t just be buzzwords anymore. When designing a product, you must also think about how accessible it is for every member of your audience, including people with limited abilities. An all-inclusive design is the need of the hour.
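
As promised above, here’s a minimal glassmorphism sketch in CSS (the class name and values are illustrative):

.glass-card {
  /* Translucent fill lets the layers behind show through */
  background: rgba(255, 255, 255, 0.2);
  border: 1px solid rgba(255, 255, 255, 0.3);
  border-radius: 16px;
  /* backdrop-filter blurs whatever sits behind the element,
     producing the frosted-glass look */
  backdrop-filter: blur(10px);
  -webkit-backdrop-filter: blur(10px); /* Safari */
}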

Now that you’ve got this crash course under your belt, will you become the best designer the world has ever seen? Perhaps not just yet, but you’ll be better than you were yesterday. And while following the laws and principles is crucial to understanding user experience and designing for the user, design thinking is the first step in designing UI.

from Pepper Square https://www.peppersquare.com/blog/the-principles-and-laws-of-ux-design-why-every-designer-should-know-them/

CSS container queries are finally here

I can’t contain my excitement while writing the first few words of this article. Ladies and gentlemen, CSS container queries are finally here! Yes, you read that right. They’re currently supported in Google Chrome (105) and coming soon in Safari 16. This is a huge milestone for web development. To me, it feels just like when we started building responsive websites via media queries, which was a game changer. Container queries are equally important (from my point of view, at least).

Since I wrote my first article on container queries back in April 2021, the syntax has changed several times, and I see this as a chance to write a fresh article and keep the previous one for reference. In this article, I will explain how container queries work, how we can use them, and what the syntax looks like, and share a few real-life examples and use cases.

Are you ready to see this game-changing CSS feature? Let’s dive in.

Introduction

When designing a component, we tend to add different variations and switch between them based either on a CSS class or on the viewport size. This isn’t ideal in all cases, and it can force us to write CSS that’s tied to a variation class or a viewport size.

Consider the following example.

We have a card component that should switch to a horizontal style when the viewport is large enough. At first glance, that might sound okay. However, it’s a bit more complex when you think about it deeply.

If we want to use the same card in different places, like in a sidebar where the space is tight, and in the main section where we have more space, we’ll need to use class variations.

.c-article {
  /* Default stacked style */
}

@media (min-width: 800px) {
  /* Horizontal style. */
  .c-article--horizontal {
    display: flex;
    align-items: center;
  }
}

If we don’t apply the variation class to the card component, we might end up with something like the following.

Notice how the card component in its stacked version is too large. For me, this doesn’t look good from a UI perspective.

With container queries, we can simply write CSS that responds to the parent or container width. Consider the following figure:

Notice how with a media query, we query a component based on the viewport or screen width. With container queries, the same happens, but at the parent (container) level.

What are container queries?

A way to query a component against the closest parent that has containment defined via the container-type property.

That’s it. It’s just like how we used to write CSS in media queries, but at the component level.

Container queries syntax

To query a component based on its parent width, we need to use the container-type property. Consider the following example:

.wrapper {
  /* Make .wrapper a query container for its inline size (width) */
  container-type: inline-size;
}

With that, we can start to query a component. In the following example, if the container of the .card element has a width of 400px or larger, we apply a specific style.

@container (min-width: 400px) {
  .card {
    display: flex;
    align-items: center;
  }
}

While the above works, it can become a bit overwhelming when we have multiple containers. To avoid that, it’s better to name a container.

.wrapper {
  container-type: inline-size;
  container-name: card;
}

Now, we can append the container name next to @container like the following:

@container card (min-width: 400px) {
  .card {
    display: flex;
    align-items: center;
  }
}
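
Worth noting: the spec also defines a container shorthand that sets both the name and the type in one declaration:

.wrapper {
  /* Equivalent to container-name: card; container-type: inline-size; */
  container: card / inline-size;
}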

Let’s revisit the initial example and see how we can benefit from container queries to avoid having multiple CSS classes. Note that the card still needs a parent element like .wrapper to act as its container, because an element can’t respond to a size query on itself.

.wrapper {
  container-type: inline-size;
  container-name: card;
}

.c-article {
  /* Default stacked style */
}

@container card (min-width: 400px) {
  /* Horizontal style. */
  .c-article {
    display: flex;
    align-items: center;
  }
}

Browser support

Container queries are now supported in Chrome 105, and soon in Safari 16.

The same applies to container query units, too.
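
If you haven’t met them yet, container query units size things relative to the container instead of the viewport. A minimal sketch, assuming the card container from above (the .card__title selector is illustrative):

@container card (min-width: 400px) {
  .card__title {
    /* 1cqi = 1% of the container's inline size */
    font-size: clamp(1rem, 3cqi, 1.5rem);
  }
}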

Also, there is a polyfill that you can use today. I haven’t tested it yet, but it’s on my list.
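
One reassuring detail: container queries degrade gracefully, since browsers that don’t understand @container simply ignore the whole block and fall back to the default styles. If you want to branch on support explicitly, a feature query works; a minimal sketch (the .no-cq-hint class is illustrative):

/* Runs only in browsers that support container queries */
@supports (container-type: inline-size) {
  .no-cq-hint {
    display: none; /* e.g., hide a notice meant for older browsers */
  }
}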

Use cases for CSS container queries

With the stable launch of container queries in Google Chrome, I’m excited to add a new little project: lab.ishadeed.com. It’s inspired by Jen Simmons’s CSS grid experiments and includes fresh container query demos that you can play with in your browser.

The lab has 10 different examples that show how container queries are really helpful, and I’m planning to add more in the future.

You can check them out at lab.ishadeed.com. Happy resizing!

Outro

This is a big day for CSS, and I can’t wait to see what you will create with CSS container queries.

I wrote an ebook

I’m excited to let you know that I wrote an ebook about Debugging CSS.

If you’re interested, head over to debuggingcss.com for a free preview.

from Ahmad Shadeed Blog https://ishadeed.com/article/container-queries-are-finally-here/

Is Dwell Time AR’s Next Performance Metric? Part II



We recently posted a question: Does AR have a measurement problem? In short, AR marketing is so new that it hasn’t developed native metrics. That, plus brand marketers’ comfort with existing metrics, draws them toward established analytics like clicks and impressions.

But the issue is that those metrics were made for different formats, including online display and search ads. As such, they don’t do justice to AR’s unique abilities, including a deeper level of engagement. That depth can lead to favorable outcomes like brand awareness and conversions.

But one metric starting to emerge as a better way to evaluate AR is dwell time. It’s showing strong early results, which in turn signal AR’s depth of engagement. That experiential depth and lasting impression (e.g., brand recall) are common and longstanding objectives for brand marketers.

So for part II of this series, we’re spotlighting a few case studies from ARtillery Intelligence’s recent report, AR Marketing Best Practices and Case Studies, Vol 2. We’ve pulled a few case studies that specifically demonstrate AR’s ability to drive favorable dwell times.

When reading these mini case studies, keep in mind that AR campaign dwell times – often exceeding 1 minute – compare favorably to online video ads, which average about 20 seconds.

AR Marketing: Best Practices & Case Studies, Volume II

Next Level

To accompany and market the film release of Jumanji: The Next Level, Sony Pictures was interested in creating an AR experience for prospective filmgoers. Partnering with AR-focused creative agency Trigger, it created a game to draw fans into the movie’s themes.

Specifically, gameplay immersed users in the world of Jumanji through both AR and audio. The latter utilized Amazon’s Lex API, letting users activate game elements through voice. This included character-driven audio playback – a natural medium for storytelling.

For example, users could say “show me Jumanji” to activate a virtual map overlaid in their space. They could then visit places from the movie by activating map locations by voice. Resulting animations included animals running through 3D scenes.

Finally, the experience had a tangible call-to-action to capture users’ interest at the right time. Upon completing the game, they were channeled into a ticket-purchasing flow by saying “buy tickets.” This melds gameplay and commerce in engaging and elegant ways.

But the real proof is in the results. The experience achieved an average dwell time of 5 minutes (more than 2.5x the web AR benchmark). It was also the first AR experience to integrate Amazon Lex and was a finalist for the 2020 Augie Award for Best AR Campaign.

Is Dwell Time AR’s Next Performance Metric?

Future Footwear

With an interest in AR’s demonstrative properties, New York fashion label Khaite released a “try-before-you-buy” experience for its Spring 2021 shoe collection. Working with creative agency ROSE, it launched a web AR campaign that let shoppers visualize the products in 3D.

Specifically, users of its website or print lookbook could scan QR codes to activate 3D models in their space. This included 3D versions of Khaite’s heels, boots, sandals, and shoes that users could rotate, enlarge, and inspect. It also featured realistic shadows, textures, and lighting.

The goal of these AR product visualization experiences is generally to boost consumer confidence before buying. The result is often greater conversions and/or basket sizes. AR can also lessen product returns, given a more informed and confident consumer.

As for Khaite’s results and ROI specifically, it achieved a 400 percent increase in sales due to the AR experience. It also increased dwell time by more than 4 minutes. “It really feels like you’re handling the shoe in a store,” Khaite founder Catherine Holstein told Vogue.

We’ll pause there and circle back in the next case study to examine AR marketing best practices and results.

Header image credit: Trigger

More from AR Insider…

from AR Insider https://arinsider.co/2022/09/06/is-dwell-time-ars-next-performance-metric-part-ii/