Law Firm Hogan Lovells Learns to Grapple with Blockchain Contracts

A blockchain distributed ledger may not replace all lawyers, but one firm is studying how the technology could eliminate many of the manual steps typically needed to execute contracts. New York City-based law firm Hogan Lovells is experimenting with so-called smart contracts and exploring the legal and organizational issues raised by the agreements executed […]

from CIO Journal. http://blogs.wsj.com/cio/2017/02/01/law-firm-hogan-lovells-learns-to-grapple-with-blockchain-contracts/?mod=WSJBlog

Designing Anticipated User Experiences

Anticipatory Design is possibly the next big leap within the field of Experience Design: "design that is one step ahead," as Shapiro refers to it. This sounds amazing, but where does it lead us? And how will it affect our relationship with technology?

I've dedicated my Master's thesis to this topic, identifying both the ethical and the design challenges that come with the development of predictive UX and the application of Anticipatory Design as a design pattern, under the overarching question: how might Anticipatory Design challenge our relationship with technology?

A Future Without Choice

Anticipatory Design is an upcoming design pattern within the field of predictive user experiences (UX). The premise behind this pattern is to reduce the cognitive load of users by making decisions on their behalf.

Despite its promise, little research has been done into the possible implications of Anticipatory Design and predictive user experiences. Ethical challenges around data, privacy and experience bubbles could inhibit the development of predictive UX.

We're moving towards a future with ambient technology, smart operating systems and anticipated experiences. Google Home, Alexa, Siri and Cortana are all intelligent personal assistants that learn from your behaviour, patterns and data, and will likely anticipate your needs proactively in the near future.

Anticipated user experiences are a promising development that could release us from decision fatigue. With the approximately 20,000 decisions we make on an average day, most of us suffer from it.

Less Choice, More Automation

Anticipatory Design is a design pattern that revolves around learning (Internet of Things), prediction (Machine Learning) and anticipation (UX Design).

Anticipatory Design Mix

Smart technology within the Internet of Things learns by observing, while our data is interpreted by machine learning algorithms. UX design is crucial for delivering a seamless anticipated experience that takes users away from technology. Anticipatory Design only works when all three actors are well aligned and effectively used.

Anticipatory Design is already used as a design principle in quite a few products without us being actively aware of it. Products like Nest, Netflix and Amazon's Echo are good examples of how products learn from, adjust to and anticipate the user's data.

5 Design Considerations

Over the past few months I've interviewed several experts in the fields of UX and A.I. to investigate what challenges lie ahead and what considerations there are to make. The following 5 design considerations were distilled:

1. Design Against the Experience Bubble

We saw what happened with Trump: the filter bubble is real, and most of us circle around in our own 'reality'. Eli Pariser described in 2011, in The Filter Bubble, how the new personalized web is changing what people read and how people think. The same risk applies when the devices around us anticipate our needs and act on them: an Experience Bubble, in which you get stuck in a loop of recurring events, actions and activities. Algorithms cause these recurring events. Algorithms are binary and unable to understand the meaning behind actions. It is worrisome that algorithms are not conversational. There should be a way to teach algorithms what is right, wrong and accidental behavior.

2. Focus on Extended Intelligence Instead of Artificial Intelligence

The head of the MIT Media Lab, Joi Ito, offered a very interesting perspective that coloured my beliefs about which design principles to follow. Mr. Ito said that humanity should not pursue robotics and generalized AI but rather focus on Extended Intelligence, because it is in human nature to use technology as an extension of ourselves. It would feel inhuman to have machines replace our daily activities.

3. Responsive Algorithms Make Data Understandable

The algorithms in use today are binary and limited to the actions and input of users. Conceptually they pretend to be 'personal' and 'understanding' about our actions, but in real life it is a matter of ones and zeros. Algorithms are not ready for predictive systems and need to become more responsive in order to adapt to people's motives and needs. Revisiting the feedback loop is one way to implement responsiveness: people can teach algorithms what, but foremost why, they like or dislike things.
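As an illustration of what such a reason-aware feedback loop could look like, here is a minimal TypeScript sketch in which every signal carries the user's stated reason, and accidental interactions can be excluded from learning. All names and fields are assumptions, not an existing system.

```typescript
// Feedback that records the "why" and an "accidental" flag, not just a binary verdict.
type Verdict = "like" | "dislike";

interface FeedbackEvent {
  itemId: string;
  verdict: Verdict;
  reason?: string;      // the "why" behind the verdict, in the user's own words
  accidental?: boolean; // lets users flag misclicks so they don't pollute the model
}

const feedbackLog: FeedbackEvent[] = [];

function recordFeedback(event: FeedbackEvent): void {
  feedbackLog.push(event);
}

// Only deliberate feedback feeds the preference model.
function trainingSignals(): FeedbackEvent[] {
  return feedbackLog.filter((e) => !e.accidental);
}

// Example: a dislike that is contextual, not a lasting preference.
recordFeedback({
  itemId: "item-42",
  verdict: "dislike",
  reason: "bought this as a gift; not interested myself",
});
```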

4. Personality Makes Interactions More Human-Like

The Internet of Things (IoT) is growing as a market, and there's a shift from mobile-first to A.I.-first, meaning that users will get a more personal, unique relationship with and experience of their devices.

When I interviewed respondents and asked them about their view on smart operating systems and Artificial Intelligence, most people referred to the movie Her as a vision of the future. This perspective is intriguing. However, looking at recent developments in smart assistants like Siri, Cortana and Google Home, an essential feature is missing: personality.

Personality adds huge value to our interactions with devices because it gives them a human touch. We can relate more to a device if it has a personality. Looking at services like Siri, I believe that in the future personality will matter more than the number of gigabytes.

5. Build Trust by Giving Control and Transparency

Today, people need to hack their own online behavior to receive the right content. It is so frustrating when you buy a gift for someone else and then get bombarded with adverts for the same product (THE SAME PRODUCT, that you just bought…).

Algorithms often misinterpret my actions; there's room for improvement. Data interaction has become a crucial element in developing experiences for the future. The respondents I interviewed voiced their concerns about the lack of transparency and control that comes with the internet. Much personal data ends up in a 'black box': no one knows how our data is used and processed by big tech firms. Providing options to control automation should build trust and enable growth.

UX Design is Evolving

The craft of UX designers is changing: increasing responsibilities, new interactions and new forms are influencing the design approach.

User interfaces, for example, increasingly take different forms (e.g. voice-driven interfaces) that require a different way of design thinking. UX designers are becoming more exposed to ethical design, since creating predictive user experiences involves a great deal of confidential data.

With the dawn of fully automated consumer-facing systems, a clear view on design mitigations and guiding principles is needed, since future designers will face much more responsibility concerning topics like privacy and data.

Current sets of design principles from Rams, Nielsen (1998), Norman (2013) and Shneiderman (2009) are insufficient for automation, because principles regarding transparency, control, loops and privacy are missing.

The evolution of Experience Design within a context of automation requires discussion and design practices to mitigate the forecasted design challenges.

Let’s Continue This Conversation

Predictive UX is a growing field of expertise, and the craft of UX design is changing with it. As we stand at the start of a new AI-driven era, it is important to share design stories, insights and practices to continue the development of Anticipatory Design as a pattern, and predictive UX as a service.

Please join the movement and share your thoughts on Predictive UX & Anticipatory Design

www.anticipatorydesign.com

from uxdesign.cc – User Experience Design – Medium https://uxdesign.cc/designing-anticipated-user-experiences-c419b574a417?source=rss—-138adf9c44c—4

Material Design and the Mystery Meat Navigation Problem

In March 2016, Google updated Material Design to add bottom navigation bars to its UI library. This new bar is positioned at the bottom of an app, and contains 3 to 5 icons that allow users to navigate between top-level views in an app.

Sound familiar? That’s because bottom navigation bars have been a part of iOS’s UI library for years (they’re called tab bars in iOS).

Left: Material Design’s bottom navigation bar | Right: iOS’s tab bar

Bottom navigation bars are a better alternative to the hamburger menu, so their addition into Material Design should be good news. But Google’s version of bottom navigation bars has a serious problem: mystery meat navigation.

Whether you’re an Android user, designer, or developer, this should trouble you.

What’s mystery meat navigation, and why’s it so bad?

Mystery meat navigation is a term coined in 1998 by Vincent Flanders of the famous website Web Pages That Suck. It refers to buttons or links that don’t explain to you what they do. Instead, you have to click on them to find out.

(The term "mystery meat" originates from the meat served in American public school cafeterias, so processed that the type of animal it came from is no longer discernible.)

An example of mystery meat navigation | Source

Mystery meat navigation is the hallmark of designs that prioritize form over function. It’s bad UX design, because it emphasizes aesthetics at the cost of user experience. It adds cognitive load to navigational tasks, since users have to guess what the button does. And if your users need to guess, you’re doing it wrong.

You wouldn’t want to eat mystery meat—similarly, users wouldn’t want to click on mystery buttons.

Strike 1: Android Lollipop’s Navigation Bar

Material Design’s first major mystery meat navigation problem happened in 2014 with Android Lollipop.

Android Lollipop was introduced in the same conference that debuted Material Design, and sports a redesigned UI to match Google’s new design language.

Navigation bar in earlier versions of Android

One of the UI elements that got redesigned was the navigation bar, the persistent bar at the bottom of Android OS that provides navigation control for phones without hardware buttons for Back, Home and Menu.

In Android Lollipop, the navigation bar was redesigned to this:

Navigation bar, Android Lollipop and up

See the problem?

While the previous design is less aesthetically appealing, it's more or less straightforward. The Back and Home icons can be understood without the need for text labels. The 3rd icon is a bit of mystery meat, but on the whole, the UX of the old navigation bar wasn't too bad.

The new bar, on the other hand, is extremely pretty. The equilateral triangle, circle, and square are symbols of geometric perfection. But it’s also extremely user-unfriendly. It’s abstract—and navigation controls should never be abstract. It’s full-blown mystery meat navigation.

The triangle icon might resemble a "Back" arrow, but what do a circle and a square mean in relation to navigation control?

Making sense of the navigation bar icons

Strike 2: Floating Action Buttons

Floating action buttons are special buttons that appear above other UI elements in an app. Ideally, they’re used to promote the primary action of the app.

Specs for the floating action button | Source

Floating action buttons also suffer from the mystery meat navigation problem. By design, the floating action button is a circle containing an icon. It’s a pure-icon button, with no room for text labels.

The truth is that icons are incredibly hard to understand because they’re so open to interpretation. Our culture and past experiences inform how we interpret icons. Unfortunately, designers (especially, it seems, Material designers) have a hard time facing this truth.

Need proof that icon-only buttons are a bad idea? Let’s play a guessing game.

Below is a list of what—according to Material Design’s guidelines—are acceptable icons for floating action buttons. Can you guess what each button does?

Mystery button 1

Ok, that’s a simple one to warm you up. It represents “Directions”.

Mystery button 2

What about this? If you’re an iOS or Mac user, you might say “Safari.” It actually represents “Explore.”

Mystery button 3

Things are getting fun (or frustrating) now! Could this be “Open in contacts”? “Help, there’s someone following me”? Perhaps this is a button for your “Phone a friend” lifeline.

Mystery button 4

Hang on, this is the button for “Open in contacts.” Right? Or is this “Gossip about a friend” since the person is inside a speech bubble?

Ready for the final round? Here’s the worst (and most used) icon:

Mystery button 5

You might think the “+” button is rather simple to understand—it’s obviously a button for the “Add” action. But add what?

Add what: that’s the problem right there. If a user needs to ask that question, your button is officially mystery meat. Sadly, developers and designers of Material Design apps seem to be in love with the “+” floating action button.

Precisely because the “+” button seems so easy to understand, it ends up being the most abused icon for floating action buttons. Consider how Google’s own Inbox app displays additional buttons when you tap the “+” floating button, which is not what a user would expect:

The “+” button opens up a menu of… more buttons?

What makes things worse is how the same icons have different meanings in different apps. Google used the pencil icon to represent “Compose” in Inbox and Gmail, but used it to represent “Edit” in its photo app Snapseed.

Same icon, different meanings: “Compose” in the Gmail and Inbox apps, “Edit” in the Snapseed app

The floating action button was intended to be a great way for users to access a primary action. Except it isn’t, because icon-only buttons tend to be mystery meat.

Strike 3: The New Bottom Navigation Bar

This brings us to the bottom navigation bar, introduced in March 2016.

For bottom navigation bars with 3 views, Google’s guidelines specify that both icons and text labels must be displayed. So far, so good: no mystery meat here.

Bottom navigation bar with 3 views: so far, so good

But for bottom navigation bars with 4 or 5 views, Google specifies that inactive views be displayed as icons only.

Bottom navigation bar with 4 views: mystery meat

Remember how hard it was to guess what the floating action button icons mean? Now try guessing a row of icons used to navigate an app.

This is just bad UX design. In fact, the Nielsen Norman Group argues that icons need a text label, especially navigation icons (emphasis theirs):

“To help overcome the ambiguity that almost all icons face, a text label must be present alongside an icon to clarify its meaning in that particular context.… For navigation icons, labels are particularly critical.”

That Material Design’s newest UI component condones mystery meat navigation is not only frustrating, but also weird. Why should text labels be shown when there are 3 views, but be hidden when there are 4–5 views?

An obvious answer would be space constraints.

Except tab bars in iOS manage to contain 5 icons and still display both the icon and the text label for each of them. So space constraints aren't a valid reason.

iOS tab bar in the App Store, Clock and Music apps: 5 icons, all with text labels

Google either decided that icons can sufficiently represent navigational actions (which is bad), or they decided that aesthetic neatness is more important than usability (which is worse). Either way, their decision worsened the UX of millions of Android users.
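App teams don't have to inherit that decision, though. As a purely illustrative sketch, the Nielsen Norman Group's rule can be made structural in an app's own navigation model by requiring a label for every icon, so an unlabeled tab is a compile-time error rather than a design-review finding (TypeScript, names assumed):

```typescript
// Every navigation item must carry a text label alongside its icon.
interface NavItem {
  icon: string;  // icon asset name
  label: string; // required: no mystery meat allowed
  route: string;
}

const bottomNav: NavItem[] = [
  { icon: "home", label: "Home", route: "/home" },
  { icon: "search", label: "Search", route: "/search" },
  { icon: "person", label: "Profile", route: "/profile" },
  // { icon: "star", route: "/starred" } // would not compile: missing label
];
```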

Material Design and Form over Function

When Material Design launched in 2014, it was met with much fanfare. It's bold, and it rides on (and one-ups) the flat design trend. The pairing of vibrant colours and animations makes it pretty to look at.

“Make it pretty!” — Material Design designer | Source

But perhaps it’s a little too pretty. Perhaps while working on Material Design, the designers got a little carried away.

Time and again, Google’s guidelines for important buttons and bars seem to prioritise form over function. Geometric prettiness was chosen over recognisability in Android’s navigation bar. Aesthetic simplicity was championed in floating action buttons, turning them into riddles in the process. Finally, visual neatness was deemed more important than meaningful labels in bottom navigation bars.

That's not to say that mystery meat navigation is a Google-only problem. Sure, you can find mystery meat in iOS apps too. But it doesn't usually appear in critical navigation controls and promoted buttons, and it isn't spelt out in the design guidelines the way Material Design's is.

Speed graph showing the correct (blue) acceleration for animations

If Google designers could devote time and effort into creating speed graphs for animations, perhaps they could spend a little time to make sure their designs aren’t mystery meat.

After all, an animated mystery button is still less delightful than a static but clearly labelled button.

from Sidebar http://sidebar.io/out?url=https%3A%2F%2Fmedium.freecodecamp.com%2Fmaterial-design-and-the-mystery-meat-navigation-problem-65425fb5b52e%23.5w65oqu37

Data Humanism, the Revolution will be Visualized.

SNEAK CONTEXT IN. (ALWAYS)

A dataset might lead to many stories. Data is a tool that filters reality in a highly subjective way, and from quantity, we can get closer to quality. Data, with its unique power to abstract the world, can help us understand it according to relevant factors. How a dataset is collected and the information included — and omitted — directly determines the course of its life. Especially if combined, data can reveal much more than originally intended. As semiologists have theorized for centuries, language is only a part of the communication process — context is equally important.

This is why we have to reclaim a personal approach to how data is captured, analyzed and displayed, proving that subjectivity and context play a big role in understanding even big events and social changes — especially when data is about people.

Data, if properly contextualized, can be an incredibly powerful tool to write more meaningful and intimate narratives.

To research this realm, I undertook a laborious personal project: a yearlong hand-drawn data correspondence with information designer Stefanie Posavec. We have numerous personal and work similarities — I am Italian and live in New York, and she is American and lives in London. We are the exact same age, and we are only-children living far away from our families. Most importantly, we both work with data in a very handcrafted way, trying to add a human touch to the world of computing and algorithms, using drawing instead of coding as our form of expression. And despite having met only twice in person, we embarked upon what we called Dear Data.

For a year, beginning Sept. 1, 2014, Posavec and I collected our personal data around a shared topic — from how many times we complained in a week, to how frequently we chuckled; from our obsessions and habits as they showed up, to interactions with our friends and partners. At the end of each week we analyzed our information and hand-drew our data on a postcard-sized sheet of paper, creating an analog correspondence we sent to each other across the Atlantic. It was a slow, small and incredibly analog transmission, which through 52 pretexts in the form of data revealed an aspect of ourselves and our lives to the other person every week.

We spent a year collecting our data manually instead of relying on a self-tracking digital app, adding contextual details to our logs and thus making them truly personal, about us and us alone.

For the first seven days of Dear Data we chose a seemingly cold and impersonal topic: how many times we checked the time in a week.

On the front of my postcard (as shown above), every little symbol represents one of the times I checked the time, ordered chronologically by day and hour; nothing complicated. But the different variations of my symbols in the legend indicate anecdotal details that describe these moments: Why was I checking the time? What was I doing? Was I bored, hungry or late? Did I check it on purpose, or just casually glance at the clock while occupied with another activity? Cumulatively, this gave Posavec an idea of my daily life through the excuse of my data collection, something that's not possible if meaning isn't included in the tracking.

As the weeks moved on, we shared everything about ourselves through our data: our envies, the sounds of our surroundings, our private moments and our eating habits.

We truly became friends through this manual transmission. And in fact, removing technology from the equation triggered us to find different ways to look at data — as excuses to reveal something about ourselves, expanding beyond any singular log, adding depth and personality to quantitative bits of information.

In a time when self-tracking apps are proliferating, and when the amount of personal data we collect about ourselves is increasing all the time, we should actively add personal and contextual meaning to our tracking. We shouldn’t expect an app to tell us something about ourselves without any active effort on our part; we have to actively engage in making sense of our own data in order to interpret those numbers according to our personal story, behaviors and routine.

While not everyone can do a project as hyper-personal as this one, data visualization designers can make their interpretations more personal by spending time with any type of data. This is the only way we can unlock its profound nature and shed light on its real meaning.

from Sidebar http://sidebar.io/out?url=https%3A%2F%2Fmedium.com%2F%40giorgialupi%2Fdata-humanism-the-revolution-will-be-visualized-31486a30dbfb%23.y7krgbafl

Making Chatbots Talk — Writing Conversational UI Scripts Step by Step

As a content writer working in a UX design agency, I've learned to accept the fact that visuals usually have a much bigger impact than text. From my perspective, this is a bit frustrating. So when my team was faced with the task of designing a website chatbot, I was really excited: finally, the time had come for writing to take over!

In this article, I want to focus only on writing the script, describing the whole process step by step. The complete case study of designing the chatbot was written by Leszek Zawadzki.

First Things First

As the chatbot was to be represented by our client's brand hero, Cody, the script had to match his friendly and playful personality. The first ideas came to mind as soon as I heard about the project: I immediately started creating conversation scenarios in my head. But I soon realized these bot-user exchanges were all quite meaningless. Yes, small talk is good for starters, but the script has to fulfil some goals. Determining what those are and figuring out ways to fulfil them should, then, be the first thing to do.

Step 1: Setting the Goals

The main aim of our client, a web development company, was to use a conversational UI website to present their skills and services as well as to increase brand awareness. Focused on these two goals, my team created a list of end goals to which the conversation was supposed to lead:

  • the user visits company blog
  • the user shares feedback about the company blog
  • the user leaves his/her email
  • the user leaves information about his/her occupation
  • the user visits services page
  • the user visits about us page
  • the user visits main company page or one of the landing pages
  • the user contacts the company
  • the user shares the chatbot website
  • the user bookmarks the chatbot website
Final version of end goals on the whiteboard.

With the end goals determined, we knew exactly where the conversation with the bot should lead.

Step 2: User Research

Since end goals are, well… at the end, we knew it was crucial to write the script in such a way that the user feels engaged enough throughout the whole conversation to reach that final point. Unfortunately, it's rather difficult to entertain someone you know nothing about, and it's even harder when your conversation is taken out of context. In everyday life, you always have some basic idea about your interlocutor (even if it's just a first impression based on their appearance), and your meeting usually has some more or less obvious reason.

We wanted to achieve a similar conversational background, at least to the extent possible in a chatbot script. It didn't seem possible without some kind of user research. I started with entry points, writing down all the possible contexts from which the user might enter the conversation: a Medium article, a Facebook post, a Twitter ad and so on. Then I created a basic profile of each type of user, including their profession, interests, lifestyle, and the reasons why they might decide to chat with a bot.

Basic persona sketch.

In the end, the different types of users were organized into two main groups: 1) potential clients of the company, and 2) people interested in the chatbot per se.

Step 3: Initial Transcript Form

Now I was ready to write the script. Or at least I thought I was. I took an A4 pad (don't think I'm crazy, I just like old-school methods) and got down to work. Five minutes later I already knew I had one more thing to figure out: "How the hell am I going to write this down?" Each response the user can choose from means starting another conversation, so should I write them all separately, cross-reference them, or draw a tree diagram? I decided the last option would be best, and I think it was. I had to rush, though, to the nearest stationery store to buy a large bristol board, since it turned out my tree had been sprouting branches like crazy.

Early stage of transcript in the tree diagram form.

Overall, when the script was finished, I was pleased with the form I had chosen. On a tree diagram the conversations were clearly separated and, at the same time, all visible together. What's more, using arrows where different parts of the talk merged helped save time on rewriting the same part several times.

Step 4: The Script

Finally, after a few hours of work, I reached the phase I initially thought would be the first and also the last one: writing. In fact, writing per se took less time than the ideation process that accompanied it. Before I composed any part of the conversation, I had to take a lot of different things into account.

Open-ended vs. Closed-ended questions
We wanted the conversation to feel as natural as possible, so initially I planned to include quite a few open-ended questions. This would definitely be beneficial from the point of view of users, who would have the freedom to say (or rather write) what they like. However, since our bot wasn't supposed to be based on AI, it would be extremely difficult to create relevant responses. No matter how hard you try, you won't be able to predict what the user might say, and any time you fail to provide a relevant answer, the talk becomes nonsense or at least awkward. Also, in the case of open-ended questions, we would run the risk of the conversation drifting away from the end goals.

We wanted to avoid that, even at the cost of limiting the interlocutor's freedom, so in the end I decided to keep only two open-ended questions, to which the bot's answer can stay the same no matter what the user writes.

1) A question about the user's name, to which Cody would always* answer "Nice to meet you".
2) A question about the user's profession which, apart from 3 pre-defined options to choose from, would feature a text input (these would help to filter users into the client and non-client groups). Here, whatever* the user says, Cody can express interest and move on to another topic.

*That's what I thought, until I realized I should also take into account…

Random and irrelevant answers
Yeah, even if you ask a simple question like "What's your name?", you have to be prepared for the possibility that someone won't give the expected answer. Some people will surely try to challenge the bot by typing swear words or simply gkbbsdfjsdtvbndxus. Of course, you may just ignore that and let your bot politely say "Hi!" to Supercalifragilisticexpialidocious or fuckerfuck, but I decided to solve this problem by preparing a special answer for such circumstances.

Chatbot’s reaction to irrelevant user response.

Leszek says more about dealing with irrelevant answers in his article.
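In code, such a guard might look like the minimal sketch below. The block list and the gibberish heuristic are illustrative assumptions, not the actual Cody implementation.

```typescript
// Crude validation of a free-text "name" answer before the bot replies.
const blocklist = ["fuck", "shit"]; // abbreviated for the example

function looksIrrelevant(input: string): boolean {
  const text = input.trim().toLowerCase();
  if (text.length === 0 || text.length > 30) return true;         // empty or suspiciously long
  if (blocklist.some((word) => text.includes(word))) return true; // profanity
  if (/[^aeiouy\s]{5,}/.test(text)) return true;                  // gibberish: long consonant runs
  return false;
}

function greet(nameInput: string): string {
  return looksIrrelevant(nameInput)
    ? "Hmm, that doesn't look like a name to me. Let's try again: what should I call you?"
    : `Nice to meet you, ${nameInput.trim()}!`;
}

console.log(greet("gkbbsdfjsdtvbndxus")); // -> the special answer
console.log(greet("Anna"));               // -> "Nice to meet you, Anna!"
```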

Context
In real life, the circumstances of a talk are always meaningful and shape the conversation in some way. It may seem that in the case of a chat with a bot the context will always be the same: the user enters the site and that's it. But the source he/she enters from, the time and the frequency all matter, and it would be a pity to ignore them.

The first thing I wanted to do, then, was to take advantage of cookies to determine whether a given user is talking to Cody for the first time. If it's a re-visit, instead of asking the user for his/her name again (which would be quite strange if Cody were a real person), I decided to refer to the previous meeting.

Using cookies to recognize revisiting users.

Another way to give the conversation some context is by using referral URL analysis to determine the source the user enters from. Whether it's an ad, social media or a blog post, referring to it can be a good starting point for the conversation, which will feel much more personalized to the user. The same goes for the device the visitor browses the site from.

In Cody's case, the mobile or desktop context had a big influence on the script. Unfortunately, I realized that at the point when it was almost finished. I had included a fragment where Cody shows the keyboard shortcut for "add to bookmarks", encouraging the user to press Cmd/Ctrl + D. Probably because I liked that part so much, I somehow didn't realize it wouldn't make much sense on mobile, and I was later forced to add an alternative version for that part, which, with an almost finished script, wasn't an easy task.
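A browser-side sketch of these three context signals, the re-visit cookie, the referral source, and mobile vs. desktop, might look like this. The cookie name, source list and breakpoint are assumptions for illustration.

```typescript
// Has this visitor met Cody before? (Sets the cookie on first visit.)
function isReturningVisitor(): boolean {
  const seen = document.cookie.includes("cody_met=1");
  if (!seen) document.cookie = "cody_met=1; max-age=31536000; path=/";
  return seen;
}

// Where did the visitor come from?
function entrySource(): "medium" | "facebook" | "twitter" | "direct" {
  const ref = document.referrer;
  if (ref.includes("medium.com")) return "medium";
  if (ref.includes("facebook.com")) return "facebook";
  if (ref.includes("twitter.com") || ref.includes("t.co")) return "twitter";
  return "direct";
}

// Mobile or desktop? (Decides whether the Cmd/Ctrl + D bookmark tip makes sense.)
function isMobile(): boolean {
  return window.matchMedia("(max-width: 767px)").matches;
}

const opening = isReturningVisitor()
  ? "Hey, good to see you again!"
  : "Hi there! First time here? Let me introduce myself.";
```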
 
Fulfilling Goals
As we already had the user groups determined, all I needed to do was create a path that would lead them to the end goals. First, I linked the different user groups to the goals that would be most beneficial, from the company's point of view, for them to achieve.

1) Potential clients were to be directed to one of the company pages and encouraged to make contact or leave their email.
2) Those interested in the bot per se were to visit the company blog and share feedback on it.

Additionally, sharing the chatbot website and bookmarking it applied to both groups.

I decided it would make sense to create two separate scripts and link them only at the beginning and end of the conversation, which I wanted to be similar in both cases.
 
The script for clients seemed to be more complicated. The most important goal here was to direct the user to the appropriate company page or promotional landing page. To do that, I had to collect some basic information about the visitor. The initial research I had done helped a lot here. Knowing that most potential clients would be designers, developers or business owners, choosing one of these options when Cody asks about their job determined the next steps. For the first two, these were limited to providing the right page or contact person, but in the business owner's case, further filtering was necessary.

Using chatbot to gather information about the visitors.

With the more difficult part of the conversation done, the part for non-clients went pretty fast, with Cody speaking mainly about his story and origin. The kind of pleasant chit-chat you usually exchange with a new person was subtly filled with utterances focused on increasing brand awareness.
 
Loops
To avoid creating hundreds of separate conversations and risking that some important information would be missing from one of them, I decided to depend on what I called loops: one conversation was split in two or three, each branch proceeded for some time, and then a couple of different branches met in the same place. That way, we were sure the things that really matter from the point of view of the project goals would be featured in each chat scenario.

There was only one problem with that: once or twice it turned out the user was asked the same question a second time. Spotting that at the point where I already had half of the script was hugely frustrating. I had two options: rewrite a big portion of the script, this time making sure none of the loops led to repetition, or… find another solution. I instantly decided on the latter, but it took me some time to come up with a remedy.

Dealing with repeated parts of the conversation.

In the end, it turned out to be very simple. If people can forget and ask about the same thing a couple of times, what's wrong with Cody doing the same?
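In code, the remedy can be as small as remembering which questions have already been asked; when a loop routes back to one of them, the bot owns up to the repetition. A minimal sketch (the IDs and copy are illustrative):

```typescript
const asked = new Set<string>();

// Returns the question text, prefixed with an acknowledgement on a repeat.
function askQuestion(id: string, text: string): string {
  if (asked.has(id)) {
    return `I have a feeling I already asked this, but humor me: ${text}`;
  }
  asked.add(id);
  return text;
}

askQuestion("occupation", "So, what do you do for a living?"); // asked normally
// ...two branches of a loop later merge back into the same part...
askQuestion("occupation", "So, what do you do for a living?"); // now acknowledged as a repeat
```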

Step 5: Development Instructions

Ok, the script was ready and checked a few times for potential holes or mistakes. All that needed to be done was to hand it to the developers in some digestible form: we couldn't give them a bristol tree diagram in my terrible handwriting and force them to decipher and type it all themselves. Our script needed to be developer-friendly.

My first idea was to use some kind of software like XMind or FreeMind that would let me reconstruct the tree form, but when I looked at the two bristol boards covering half of the wall, I decided it wouldn't be the best solution. Then I thought about cross-references; however, I was worried there might be too many of them, which would make the script very complicated to read. So I started by writing down all the conversations separately.

After 3 hours of writing, copying and pasting, I had almost a hundred conversations and, though I'm not the best at maths, I estimated we'd finish with 480 different versions if I continued this way. It didn't make sense, so after discussing the script form with my team, we ended up writing it down in cross-reference form.

Final version of the script.

The whole conversation was divided into unique parts (any fragment that repeated at some point appeared only once), each named and accompanied by a number and a letter that referenced one another. The developers were provided with a key to read the script.

Key
/ different responses that:
- change the conversation if the sign (1) or (1A) appears
- do not change the conversation if no sign appears
| different user responses followed by relevant bot responses, without changing the course of the conversation
[1A] parts of the conversation
(1A) cross-reference to another part of the conversation

Twelve pages, instead of the 86 I already had (remember, that was less than a quarter of it before I gave up), was an achievement. And though it took us a moment to explain the script and the key to the developers, in the end they knew exactly how to approach it. Our job was done.
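For illustration, here is one way the numbered-and-lettered format from the key might translate into data a developer could type the script into. The field names are assumptions; only the [1A]/(1A) convention comes from the key above.

```typescript
interface UserResponse {
  text: string;
  botReply?: string; // the "|" case: a reply that doesn't change the course of the conversation
  goTo?: string;     // the "(1A)" case: a jump to another part of the conversation
}

interface ScriptPart {
  id: string; // the "[1A]" label
  botLine: string;
  responses: UserResponse[];
}

// Each unique part appears exactly once; repeats become cross-references.
const script: Record<string, ScriptPart> = {
  "1A": {
    id: "1A",
    botLine: "What do you do for a living?",
    responses: [
      { text: "Designer", goTo: "2A" },
      { text: "Developer", goTo: "2B" },
      { text: "Business owner", goTo: "3A" },
    ],
  },
  // ...remaining parts
};
```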

Conclusions

Writing doesn't usually seem like much of a challenge. Good coffee, a text editor, a bit of inspiration, and voilà. Not this time… Creating a conversational UI script is a challenge, especially when you do it for the first time. That project taught my team a lot, and next time we're faced with preparing a chatbot script, we'll already have a set of guidelines:

  • start by determining the main objectives of the script: the end goals users should be led to
  • do research to get at least a basic idea of who the chatbot's visitors will be
  • prepare the script outline first, deciding on the type of questions to use and a basic path leading users to the goals
  • think over the context of the conversation (device used, entry points, reasons to enter the site) and accommodate the script to it
  • try to predict less expected situations such as revisits or irrelevant answers
  • decide on the form of the transcript before writing
  • think about the structure of the conversation: should it consist of separate chats, or should they be connected at some point?
  • talk to developers and decide in which form the script should be digitalized

And last but not least: think twice before rushing into doing something.

from uxdesign.cc – User Experience Design – Medium https://uxdesign.cc/making-chatbots-talk-writing-conversational-ui-scripts-step-by-step-62622abfb5cf?source=rss—-138adf9c44c—4

A framework for designing a better user onboarding experience

After weeks of brainstorming, sketching, designing, and arduous development, your app’s ready to launch. Your team’s enthusiasm is through the roof.

This is the moment of truth: Will the app be successful?

User onboarding has so much to do with an app’s success—really, it can make or break it. Done right, it’ll result in people coming back to use the app again and again.

Great user onboarding feels effortless, demonstrates value, and bridges the gap between users’ expectations and what the product can help them achieve.


Related: 5 key lessons for successful user onboarding


Distilling the experiences we’ve had building apps here at tapptitude, we’ve come up with a straightforward framework called instruction-action to better understand and design effective user onboarding flows for mobile products.


“A great user onboarding flow feels effortless and demonstrates value.”

The instruction-action framework is based on strategically playing with the 2 building blocks of the user onboarding process:

  1. Instruction elements
  2. Action elements

Instruction elements

Instruction elements are your best friends. Be it annotations, modals, or any other bits of copy, use instruction elements to efficiently communicate to the user how to use the app so they can discover its core value.

Some of the most popular instruction elements:

  • Splash screen
  • Welcome screens (with benefits or features)
  • Annotations
  • Permissions
  • Explanatory modals 


People won’t come back to use your app a second time if they don’t immediately understand how to get the most out of it.


Action elements 

Nir Eyal's Hook model says the actions someone takes in a product are triggered by carefully designed persuasion elements. Ideally, such elements combine motivational and instructional content, so that the user has reasons to perform a task and also knows how to do so.


“Users need to immediately understand how to get the most out of your app.”

When it comes to action elements, think of strategic design elements: clear calls to action and suitable visuals that act as a trigger for the user. The smallest arrow can have a big impact. Signal to the user that they’re on the right track. Instead of guessing whether they should click or swipe, the user will feel encouraged to take action.

Examples of action elements:

  • Sign in
  • Sign up
  • Allow access
  • Actionable tool tip


In any given user onboarding process, instruction elements and action elements work together to lead the user where they can experience the product’s value. How you combine these 2 types of elements depends on the purpose of the screens and the overall logic of the onboarding process.

Some typical examples of purposes app screens could have:

  • Browse through the primary features of the app
  • Present best practices within the app
  • Collect basic user account information in order to set up the app, tailored to the person using it
  • Explain a feature while allowing the user to experience it at the same time; useful and fun
  • Upsell your app's capabilities: show people what the app can do and offer a glimpse of what more it could do with just a little financial support on their behalf
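To make the pairing concrete, each screen can be modeled as a purpose plus the instruction and action elements that serve it. A minimal TypeScript sketch, with the element lists mirroring the ones above and the screen contents invented for the example:

```typescript
type InstructionElement = "splash" | "welcome" | "annotation" | "permission" | "modal";
type ActionElement = "sign-in" | "sign-up" | "allow-access" | "tooltip";

interface OnboardingScreen {
  purpose: string;
  instructions: InstructionElement[];
  actions: ActionElement[];
}

const flow: OnboardingScreen[] = [
  {
    purpose: "Present the app's core benefit",
    instructions: ["welcome"],
    actions: ["sign-up"],
  },
  {
    purpose: "Explain why photo access is needed, then request it",
    instructions: ["modal"],
    actions: ["allow-access"],
  },
];
```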


“Make your app’s core value obvious every chance you get.”

Now it’s time to assess how to work with instruction and action elements.

Sometimes people don’t discover an app’s core value the first time they use it. So it’s the responsibility of the user onboarding process to get that person back on track and into the conversion flow. 

Based on this premise, we realized that there are 2 major areas where onboarding happens: outside the product and inside the product.

Outside-product onboarding

What are the chances of someone downloading and using your app without gathering the least bit of information about the product and believing in its promise?

Close to zero.

User onboarding ends with the user experiencing their 'aha' moment. But it all starts with their first encounter with the product, or what we like to call outside-product onboarding. And that usually happens long before using the product itself—on social media, the product's website, or in the App Store. During these first encounters, the user establishes their expectations about the product and jumps into the first phase of your conversion flow.

And this is just step 1 of outside-product user onboarding. 

Think about Deliveroo’s App Store screens. Isn’t this what you’ve always wanted from a food delivery app? They’re spot on in presenting the benefits of the app.

Say you’ve convinced someone to download your app, and then they open the app for the very first time. What will this experience entail?

This is where step 2 of outside-product user onboarding takes place. 

You could use a few instruction elements (nice visuals and bits of copy) and action elements to get the user inside the app. Show the user the most compelling benefits of your app in a few nicely designed screens, and encourage them every step of the way to try it out for themselves.

Related: Copywriting principles that will make new users love you

Another approach: Make it clear to users that sharing information about themselves will be rewarded in the long run. Use instruction elements to make your intentions clear, and action elements to point the user in the right direction. 

If you've managed to pique the user's interest and motivate them to give the app a chance, you can move on to the second major area of mobile user onboarding.

Inside-product onboarding 

Once the user is inside the app, they’ll expect to find everything they’ve been promised in the previous onboarding phase. You don’t want them to get lost, wondering what to do next. So you have to show them where to go. 


Instagram does a great job explaining the benefits of allowing access to photos and the microphone beforehand.

The focus of the onboarding process should be to guide the user towards the core value proposition of the app. Help them discover how the app can bring an improvement to the way they’ve been doing things up until now, or how much fun the app is. Once this is done, the user will be a lot more open to discover the rest of the app’s features, one at a time. 

Let’s take a look at a few examples of how you could guide the user through the core use case of your app.

For a social app, people will expect to be able to contact a friend as soon as possible in order to decide if this product brings anything new to the table. Is the core feature of the product the ability to chat with friends and family, or is the user’s profile equally important? Show the users how to set up a profile with a minimum amount of information, how to add friends (offer the option to import contacts), and enable them to chat freely. Then cross your fingers. 


“A good onboarding flow guides new users towards the app’s core value proposition.”


For a calendar app, it’s all about how well the product organizes the user’s schedule. Allow the user to either import or sync existing calendars and events from other platforms, if they want. Quickly enable the user to get the feel of the product. What’s the design of this calendar? Is the schedule easy to analyze? What kind of customization is available for the different types of events?

For a photo editing app, show the user how effortless the entire process is. Enable them to either pick an image from the camera roll or take a picture on the spot, guide them through the editing process using annotations where necessary, and finally present them with the sharing options. For people working with picture editing apps, a fluent editing journey is as important as the quality of the product.

A good mobile product deserves a solid user onboarding strategy. Don’t limit yourself to the product itself. Start planning by taking a close look at your app’s core value—and make it obvious every chance you get. Be it an ad for the app or an onboarding screen, everything has to make a valuable statement about the product and its capabilities.

Keep reading about onboarding

Sinziana Chitea
Content Marketing Specialist at tapptitude, a native mobile app development agency, where she successfully teases everyone into taking cute pictures for Instagram. Discovering the tech world bit by bit, and writing it all down in the agency’s blog. Enthusiastic about everything visual. And sweets and dogs.

from InVision Blog http://blog.invisionapp.com/user-onboarding-framework/

Un-Abandon My Cart: 5 Ways to Improve Checkout Conversion Rates

A potential customer has spent time exploring your product listing. They’ve looked into your various offerings on a specific product category and added their product of choice to the cart. Then, all of a sudden, poof – the cart has been abandoned. Gold turns into stone.

For e-commerce businesses, shopping cart abandonment may be the most frustrating thing that can happen. In this article, we'd like to help you combat it by sharing five ways to improve your checkout conversion rate.

But, first things first:

Why do users abandon their carts?

Usabilla defines Shopping Cart Abandonment as “the rising phenomenon of users filling their virtual carts with everything they want, but leaving your site before they complete their purchase.”

There are a lot of reasons why online shoppers decide to discontinue the checkout process. Some are wholly at the discretion of the shoppers; however, some can be attributed to our own wrongdoings.

In Usabilla’s ebook, Combat Shopping Cart Abandonment, customer experience is identified as an integral part of resolving shopping cart abandonment. Sudar explains, “Customer experience (CX) is an emerging trend in the world of business. Although it can be considered a derivative of customer service, it very much stands on its own as a discipline to be valued. Practicing good CX can be a way to alleviate and prevent shopping cart abandonment.”

Since we cannot control the personal decisions that shoppers make, what we can deal with are the factors that would otherwise have them complete the checkout process. The following are the reasons why shoppers abandon their carts:

1. Extra costs are too high

“Business is business,” they always say. And it is true that in order for your business to survive, you will have to make profits, which means intelligently pricing your products. In doing so, you’ll need to consider all expenses related to the sale of a product, including shipping, taxes, fees, and the cost of the good itself. However, what is unwise is making it harder for customers to actually buy your products because of too many extra costs.

2. Registration is needed before checking out

Some shoppers do not appreciate account registration – some don’t have the time, while some just don’t want to share that much information about themselves. So, the hassle of having to create an account before being able to buy is simply causing some shoppers to decide not to continue with the checkout process.

3. The checkout process is complicated

Speaking of the checkout process, cart abandonment also occurs when the checkout process itself is too complex. What is a complicated checkout process? One that asks a customer to fill out too many forms, and provide too much information that isn’t necessary for completing the sale.

4. The total amount is not available upfront

Shoppers tend to turn away from e-commerce sites that do not display the final price at first glance of the product. This triggers a fear in shoppers that they might get overcharged, because why would the price have to be hidden, right?

5. Errors are experienced on the e-commerce site

Another major turn-off is website errors: buttons that cannot be clicked, bookmarked items that return an error instead of redirecting to a similar product page, or, worse, the website itself crashing. We suggest you study e-commerce analytics to determine which pages need to be optimized, or check your favorite webmaster tool for crawl errors and more.

Optimize your checkout

Now you know the five most common reasons shoppers abandon their carts. The next question is how to address them so that your conversion rate goes up. Here are some checkout optimization ideas that really help reduce cart abandonment:

1. Optimize product pages

Design your product pages well. Make sure that the photos are of a high quality and that the site is also optimized for mobile. Also make use of product descriptions so that your shoppers know more about the products you are selling.

2. Try a single-page checkout process

Shorten your checkout process to a single page. Ask only for information that is necessary to complete the sale, and offer a "same as billing address" option to auto-fill the shipping address.
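A minimal browser-side sketch of that "same as billing address" convenience might look like the following; the element IDs are assumptions for illustration.

```typescript
const addressFields = ["name", "street", "city", "zip", "country"];

function copyBillingToShipping(checked: boolean): void {
  for (const field of addressFields) {
    const billing = document.getElementById(`billing-${field}`) as HTMLInputElement;
    const shipping = document.getElementById(`shipping-${field}`) as HTMLInputElement;
    if (checked) shipping.value = billing.value;
    shipping.disabled = checked; // mirrored fields don't need editing
  }
}

document
  .getElementById("same-as-billing")
  ?.addEventListener("change", (e) =>
    copyBillingToShipping((e.target as HTMLInputElement).checked)
  );
```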

3. Say goodbye to mandatory sign-up

As it turns off a lot of potential customers, your mandatory sign-up policy must be abolished. Offer sign-up but don’t require it. Instead, allow shoppers to complete the checkout process as a guest, asking only for the necessary information.

Screen Shot 2017-01-30 at 11.05.18

4. Offer free shipping and return

Smart pricing will allow you to offer free shipping and returns to your shoppers. Doing so will not only help prevent cart abandonment but will also help you gain more sales. Again, price intelligently: free shipping only works if you still realize profits.

5. Form optimization

Your forms need to be optimized too. Here are three top tips to consider:

i) Google autofill

With the help of Google Autofill, certain form fields are filled in automatically. The fields usually filled out automatically are those for name, address, and email address.
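Autofill works best when inputs carry the standard HTML autocomplete tokens, which browser autofill features key off. A small sketch of tagging checkout fields accordingly (the element IDs are assumptions):

```typescript
// Standard autocomplete tokens per the HTML spec: "name", "email",
// "street-address", "postal-code", "cc-number", and so on.
const autocompleteTokens: Record<string, string> = {
  "full-name": "name",
  "email": "email",
  "street": "street-address",
  "zip": "postal-code",
  "card-number": "cc-number",
};

for (const [id, token] of Object.entries(autocompleteTokens)) {
  document.getElementById(id)?.setAttribute("autocomplete", token);
}
```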

ii) Think mobile

Again, consider mobile in your optimization initiative. Filling out forms on mobile devices is not as easy as on desktops and laptops. Make sure that text boxes are not too crowded and that autofill works there too.

iii) Payment (secured payment gateway)

In optimizing payment forms, put security as a priority. Ensure that when you capture payment information (especially debit and credit card numbers), these are encrypted, tokenized, or both. Security should never be taken for granted!

Bonus Tip: Retargeting and abandonment email

pexels-photo-242494

Our optimization tips are sure to help you address your cart abandonment problem. However, we can't promise 100% conversion rates. To deal with shoppers who still abandon their carts, make use of retargeting and email. Give them a reason to return to their shopping and, this time, complete it.

Abandonment email refers to the effort of reaching out to cart abandoners. Basically, an email is sent to a shopper who leaves their cart, usually with a reminder that their shopping cart is pending. It may include offers to make the shopper more interested in the product.
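A back-of-the-napkin sketch of such a reminder trigger: if a cart sits idle past a grace period, queue one email. The grace period, copy, and sendEmail stub are illustrative assumptions.

```typescript
interface Cart {
  email: string;
  items: string[];
  lastActivity: number; // epoch milliseconds
  completed: boolean;
  reminded: boolean;
}

const GRACE_PERIOD_MS = 60 * 60 * 1000; // 1 hour, tune to your audience

// Stub: wire this up to your email service provider.
function sendEmail(to: string, subject: string, body: string): void {
  console.log(`[email to ${to}] ${subject}: ${body}`);
}

function checkAbandonedCarts(carts: Cart[]): void {
  const now = Date.now();
  for (const cart of carts) {
    if (cart.completed || cart.reminded) continue;
    if (now - cart.lastActivity > GRACE_PERIOD_MS) {
      sendEmail(
        cart.email,
        "Your cart is still waiting",
        `You left ${cart.items.length} item(s) behind. They're still reserved for you.`
      );
      cart.reminded = true; // one reminder only
    }
  }
}
```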

Retargeting, on the other hand, ensures that ads related to the product/s your user left behind will be seen across other websites and platforms. Hopefully, this visual reminder will trigger a purchase decision next time around.

Un-abandon your carts with our optimization tips

The tips above deal directly with the factors that lead shoppers to abandon their carts (at least those we can control). If you follow our optimization ideas, you should enjoy higher conversion rates. So optimize your e-commerce site now and turn it into something shoppers would regret abandoning.


Shopping Cart Abandonment - Usabilla

The post Un-Abandon My Cart: 5 Ways to Improve Checkout Conversion Rates appeared first on Usabilla Blog.

from Usabilla Blog http://blog.usabilla.com/un-abandon-cart-5-ways-improve-checkout-conversion-rates/

Draw sketches for virtual reality like a pro

Virtual reality is a brand new frontier, but there are already tools that cover almost all the steps of creating a new experience (thank you, game industry).

But sketching is ancient: we're still using the same tools as ages before, a pencil and a piece of paper. I don't want to say that's bad; I really like making fast drafts. But VR sets new boundaries, such as a comfortable field of view, no frame around the image, the ability to move your head, and so on.

I used to do a lot of sketches of only part of the view, or views from different projections (for example, front + top). I also found this cool template for drafting stories. It's good, but it doesn't reflect how the user will see the scene. There is also a solution for the Sketch App, but I'm an old-fashioned guy who likes taking up the pencil first.

So I tried using a 360° panorama grid to align my sketches to a wide-angle view. It wasn't looking very spectacular, until I scanned it and put it on my face.

And it works! So I'll be glad to share my process step by step:

1. Get the templates here:

2. Print any of them.

3. Take your pencil and draw. But first, let me explain what the marks mean:

Each vertical line marks 10° of the horizontal field of view; each curved line marks 10° of the vertical field of view.

To start, remember that the centre of the template is the front view, and the left and right borders are the back.

There are two "rectangles" that show the comfortable and maximum fields of view, per Alex Chu. They are just there to give a sense of scale, which is not easy to catch at the beginning.

4. Scan or take a photo of your art.
Tip: You can draw on a Wacom or any other pen tablet as well.

My 10min sketch

5. Crop the image to a 2:1 ratio; the marks in the corners should help you do this. (A small script after these steps automates this check.)

6. Open it in any 360° photo viewer.
On desktop, I'm using GoPro VR Player.
On iPhone, I'm using the Street View app by Google.
Instructions on how to import images. Tip: Don't forget that your image should be at least 14 megapixels (5,300 by 2,650 pixels) and in JPG format.

7. Voila! Just like that, you have your sketches in virtual reality.

Of course, it's a bit of a bulky solution, but it's the fastest way I have found.
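For anyone who wants to automate the checks in steps 5 and 6, here is a small browser-side sketch that center-crops a scan to the 2:1 equirectangular ratio and warns below the roughly 14-megapixel minimum (5,300 by 2,650 is about 14 million pixels). It assumes the scan is already loaded into an HTMLImageElement.

```typescript
const MIN_WIDTH = 5300;
const MIN_HEIGHT = 2650;

function cropTo2to1(img: HTMLImageElement): HTMLCanvasElement {
  // Fit the largest centered 2:1 window inside the scan.
  const width = Math.min(img.width, img.height * 2);
  const height = width / 2;

  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(
    img,
    (img.width - width) / 2,   // source x: center horizontally
    (img.height - height) / 2, // source y: center vertically
    width, height,             // source crop size
    0, 0, width, height        // destination
  );

  if (width < MIN_WIDTH || height < MIN_HEIGHT) {
    console.warn(`Scan is ${width}x${height}; rescan at a higher DPI for Street View.`);
  }
  return canvas;
}
```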

If you have any comments, questions or ideas, just let me know.

P.S. Here is a demo created by Andy Stone using this template.

from Sidebar http://sidebar.io/out?url=https%3A%2F%2Fvirtualrealitypop.com%2Fvr-sketches-56599f99b357%23.7a7r5qfsj

A New Tool to Analyze Medical Records

Health-care companies are collecting more data on patients, yet they are struggling to realize its full value because much of it can't be analyzed in a traditional database. McKesson Corp., the drug distributor and health-care technology firm, says it is developing a new analytics tool that could help solve that problem for oncologists by using […]

from CIO Journal. http://blogs.wsj.com/cio/2017/01/25/a-new-tool-to-analyze-medical-records/?mod=WSJBlog

Designing the Internet of Things: 5 Critical Design Principles for IoT

Designing the Internet of Things

5 Critical Design Principles for IoT

Imagining an IoT ecosystem — The old school way!

IoT continues to gain tremendous momentum, and even more organizational interest, to the tune of multi-million dollar investments. Companies like Samsung, Google, Ford, GE, and more have made tremendous organizational shifts, in order to fully understand and contribute markedly to what many are calling the next big technological revolution.

It’s both exciting and surreal. Exciting because of the potential to create intelligent environments. And surreal because many people still don’t know what IoT is, what it means, and why it’s important to them. And it’s this very mystery of IoT that should guide the next wave of IoT experiences.

As IoT continues to enter the mainstream, it needs to elevate beyond the technology, beyond the novelty of simply being connected. The value of IoT products needs to be clearly understood by consumers and seamlessly adapted to their lives.

This is the challenge that all IoT products and/or experiences will need to solve, and why traditional UX/UI designers will play a key role in IoT's continued evolution. As a developer of UIs for almost 20 years now, I've recently begun shifting my focus towards IoT and have started to craft some ideas around general design principles.

Here are 5 principles that I believe are critical as we begin to consider the role of digital design in IoT:

from Sidebar http://sidebar.io/out?url=https%3A%2F%2Fiot-for-all.com%2Fthe-iot-design-principles-fe04e635f43a%23.mjayvuz8r