Why user onboarding is the most important part of the customer journey

The statistics don’t lie. User onboarding is the most important part of the customer journey by 2.6X. Getting users hooked, keeping them engaged, and retaining them in the long term all depend on delivering a great first-time user experience.

from The Appcues Blog https://www.appcues.com/blog/user-onboarding-customer-journey

10 Powerful User Nudges Illustrated

Theory

Small, individual tasks are far less daunting than big ones. The way in which tasks are presented and broken down affects how motivated we are to start and finish them.

Example

Companies like Ryanair break the entire purchasing process into chunks. They offer you a low ‘seat price’ and then lock you into the process, getting you excited about your trip. Then, once you have decided where to go, you begin to form a mental commitment. Once this happens, it’s very unlikely that you’ll change your mind about the purchase.

And at this moment they start to add the extra charges in ‘chunks’. By the time you get to the full cost of your flight, you have put so much effort into the booking process, and are so attached to the idea of the holiday, that you would rather complete the purchase at a cost significantly higher than the initial seat price than write off the investment of your time and emotion.

from Sidebar https://sidebar.io/out?url=https%3A%2F%2Fuxplanet.org%2F10-powerful-user-nudges-illustrated-540ce4063f9a

3 best practices for SaaS customer onboarding

If you work in SaaS, chances are you’re familiar with the enablement gap that exists between the signing of a customer contract and their successful adoption. Since adoption is a key metric for successful onboarding, the priority is to close that gap quickly. 

Many companies have created dedicated teams tasked with driving one-to-one touchpoints with customers and getting their teams up-to-speed and enthusiastic about using the product they just acquired. By helping customers take ownership of the tools, we can set a direct course for their first value moment via tech-touch or human-touch interactions. 

While the onboarding specialists provide only one part of the full customer onboarding experience, we’re often a uniquely human interaction along the way. Therefore, it’s critical that we make the most of the opportunity to represent the product and bring it to life for the customer.

At UserTesting, the Customer Onboarding team has spent the past two years designing and iterating on its programming to increase successful product adoption. Today, we want to share the three best practices that we’ve learned along the way:

1. Strike while the iron’s hot

As onboarding facilitators, we often get to provide customers with their first impressions of the product. This is our chance to make the most of their enthusiasm and get users in the product early. Depending on how much of your product enablement is automated (in-product cues, tutorial videos, etc.), make the most of your human-to-human time by collaborating in the product and workshopping on real projects. Keep up the momentum by encouraging teams to try out the tailored use cases you’ve discussed.

2. Have everyone in the room (at just the right times)

Of course, we want everyone to love our product as much as we do, but some conversations are going to be more relevant to certain team members than others. Ensure value for your customers by inviting just the right people to the appropriate touchpoints. Design targeted workshops for your different user groups, including managers and execs. Adoption rises when everyone understands the role they play in the tool.

3. Make space for conversation, collaboration, and contrarians

If you want to facilitate engaging conversations, design your onboarding program for interaction and collaboration. Build working sessions and encourage the customer team to lead the way on their real-world projects. While video tutorials and help center articles are a necessary component for onboarding, discussing questions and concerns in-person will increase the likelihood of retention.

Empathy in onboarding leads to more successful adoption

Whether you work in a dedicated customer onboarding function or in a longer-term customer success role, you have the opportunity to leave a lasting, positive impression of your product by providing a thoughtful, targeted human touch for your customers. 

Allow your marketing campaigns and smart product triggers to drive autonomy, but don’t forget the value of hands-on relationship-building in the early stages of customer onboarding. The connections you make with customers through the onboarding experience may be the difference between successful adoption and renewal or inactivity and customer churn.

About the author: Sean Treiser is one of UserTesting’s Onboarding Specialists—helping to train customer teams across a variety of industries, shapes, and sizes in user research best practices. Outside of the office, you’ll find him riding bike trails, patronizing sushi bars, and attending just about every concert around Atlanta.

from UserTesting Blog https://www.usertesting.com/blog/3-best-practices-saas-onboarding/

A Guide to MLA Format and Annotations

At one time or another, whether for a college writing assignment, a groundbreaking research paper, or work in a professional environment, you may be asked to write something in the MLA format.

Not sure what the MLA format is? Don’t worry, that’s what this article is about! Keep reading to learn all about this writing structure, how to use it successfully, and how CloudApp can help.

What is MLA Format?

The MLA format is a document organization structure that was developed and popularized by the Modern Language Association, a collection of American scholars that “promotes the study and teaching of literature.”

The MLA format was first introduced in 1951 as a way for all researchers, scholars, students, and others in the fields of language and literature to structure their research papers and assignments. By adopting one uniform approach to structure, the Modern Language Association hoped to make the consumption of information as easy and streamlined as possible.

The MLA format style guide was most recently updated in 2016. This is the eighth version published by the Modern Language Association since the format was first developed.

How to Do MLA Format

Now that we have a shared understanding of what the MLA format is, we can begin discussing how to use it. The following nine simple steps will teach you how to craft a document using this formatting structure — no matter which word processor you choose to use.

  1. Set your document’s page margins to one inch if they aren’t set to this figure already. One inch is standard for all MLA formatted documents.
  2. Choose “Double” for the line spacing of your document. This addition of white space is used to make the consumption of content easier.
  3. Select a clear and easily legible font such as Times New Roman and make sure to set it to size 12, the standard font size for all MLA formatted documents.
  4. Create an appropriate header that includes your full name, your instructor’s name (if applicable), the course name (if applicable), and the date. You’ll also want to ensure that each page is automatically numbered in the upper right-hand corner.
  5. Below the header, type the title of your document and center it. Use standard title case, which means the first word and all principal words of the title are capitalized.
  6. As you write, be sure to indent each new paragraph ½ inch, leave one space after periods and other punctuation points, and only use italics when absolutely necessary. Do not use bold letters or underline your words.
  7. You should also know that MLA in-text citation is done via parenthetical citations and is the same regardless of container (e.g. book or article). Each MLA format citation should include the author’s name and the page number, for example: (Smith 23).
  8. The Modern Language Association also has strict guidelines when it comes to works cited. Fortunately, the good folks at bibme have created an extensive guide. We suggest viewing and following it to the letter.
  9. Finally, once your document is written and structured in the MLA format, you can distribute it. Depending on the purpose of your paper, you may be able to do this digitally. But if a hard copy is needed, use white paper (not off-white or ivory) and choose a high-quality, non-cardstock version for printing purposes.

There you have it! Follow these nine easy steps and you’ll be able to submit all of your research papers, class assignments, and other documents in a perfect MLA format.
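If you happen to be generating your paper programmatically, here’s a minimal sketch of steps 1 through 6 using the python-docx library. This is purely an illustration on our part (the MLA guidelines themselves say nothing about code); any word processor’s settings dialogs accomplish the same thing.

    # A minimal sketch of the core MLA settings (steps 1-6 above) using the
    # python-docx library. Illustrative only; the library choice is ours, not MLA's.
    from docx import Document
    from docx.shared import Inches, Pt

    doc = Document()

    # Step 1: one-inch margins on every side.
    for section in doc.sections:
        section.top_margin = Inches(1)
        section.bottom_margin = Inches(1)
        section.left_margin = Inches(1)
        section.right_margin = Inches(1)

    # Steps 2-3: double spacing and 12 pt Times New Roman for all body text.
    normal = doc.styles["Normal"]
    normal.font.name = "Times New Roman"
    normal.font.size = Pt(12)
    normal.paragraph_format.line_spacing = 2

    # Step 6: indent each new paragraph half an inch.
    normal.paragraph_format.first_line_indent = Inches(0.5)

    # Steps 4-5 (header, page numbers, centered title) are easiest to handle in
    # your word processor, so they are omitted here.
    doc.add_paragraph("Body text goes here.")
    doc.save("mla_paper.docx")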

MLA Format Examples

It can be difficult to grasp the MLA format without seeing it first hand. With that in mind, we’ve included a couple of MLA format examples for you to view below. Notice how each of the elements previously discussed is included.

Purdue University (source: Purdue.edu)

BibMe works cited page (source: bibme.org)

These two examples clearly illustrate what documents in the MLA format should look like. If you’re new to this formatting style, we suggest comparing your final paper to them to ensure you’ve formatted your work in the correct way.

MLA Format Annotations

Now, let’s switch gears a little bit and talk specifically about MLA format annotations. Annotations are important elements of many research papers and other documents that regularly use the MLA format.

In the next few sections, we’ll explain what annotations are, how to use them in the MLA format, and a handy annotation tool you can use to make the entire process smooth and painless.

What is Annotation

Before we get too far ahead of ourselves, we’ll take a quick minute to define what an annotation is. According to ThoughtCo., an annotation is:

“A note, comment, or concise statement of the key ideas in a text or a portion of a text and is commonly used in reading instruction and in research.”

There are many different ways to annotate a document and different best-practices depending on the kind of document being annotated. In the next section, we’ll cover how to annotate in the MLA format that we discussed earlier in this article.

Annotation in MLA Format

The MLA format, unsurprisingly, has specific requirements for annotation. In most cases, these annotations will occur in the bibliography section of any research paper or report you happen to be working on and will be used to cite books, articles, and other resources necessary for research purposes.

There are two types of annotations to be aware of:

  1. Summary Annotations: A summary annotation is exactly what it sounds like: an annotation that summarizes the source cited. These annotations should include the author and topic of the document, its purpose, the date it was published, and how it was made available for public consumption.
  2. Evaluative Annotations: An evaluative annotation includes all the aspects of the summary annotation but also includes an assessment of the work’s accuracy, its relevance to the report, and its overall quality. These kinds of annotations help readers learn whether the document will be useful to them or not.

When writing annotations in the MLA format, there are a few things you’ll want to keep in mind, beyond the type of annotation you’re creating.

  • Annotations should be written in the third person, double-spaced, and about three to six sentences long, or 150–250 words.
  • If the paper in question required a large amount of research and there are many works that need to be cited, feel free to organize them by topic.
  • Be as objective as possible. If you must share an opinion, explain it. That way the reader can understand where you are coming from.

One of our favorite annotation examples comes from Columbia College.

Follow these basic rules and guidelines and you’ll be able to easily write annotations in the MLA format and properly cite the works you used to craft your paper or report.

Annotation For the Future of Work: CloudApp

Before we let you go, we want to address one more thing. Namely, how CloudApp can make the crafting of annotations in the MLA format much more convenient.

If you’re not familiar with CloudApp, it’s a revolutionary visual communication tool that combines webcam and screen recording, GIF creation, and image annotation features in one powerful and easy to use package that’s sure to boost your productivity. Plus, it’s free to use!

When it comes to modern annotation methods — especially as they pertain to the MLA format — there isn’t a better tool than CloudApp. Keep reading to learn how our tool can be used.

How to Annotate With CloudApp

Annotating your work with CloudApp couldn’t be simpler. Just follow these directions and you’ll be up and running in no time!

For Mac Users

Are you part of the Apple faithful? The following four steps will allow you to easily annotate just about anything — images, reports, research papers… the list goes on!

  1. Download CloudApp and create your free account. Then navigate to the upper right-hand corner of your screen (in the menu bar) and click the CloudApp icon.
  2. In the drop-down menu that appears, select the “Annotation” option. Then, either take a screenshot of your screen, or upload the document you want to add annotations to.
  3. Next, choose the type of annotation you’d like to add. CloudApp has arrows, lines, circles, and emojis available. But to adhere to MLA format guidelines, you’ll want to stick with the text box option.
  4. Annotate your image or document to perfection. When you’re done, click the blue “Save” button and your work will be uploaded to the cloud automatically. From there, you’ll have the ability to email it, send it in Slack chats, attach it to Trello cards, and more.

For Windows Users

If you’re a Windows PC user, you can use CloudApp too. The process is largely the same as described above, but we’ve included a separate set of instructions for clarity:

  1. Download CloudApp and create your free account. Then navigate to the lower right-hand corner of your screen (in the Quick Launch icon tray) and click the CloudApp icon.
  2. In the drop-down menu that appears, select the “Annotation” option. Then, either take a screenshot of your screen, or upload the document you want to add annotations to.
  3. Next, choose the type of annotation you’d like to add. CloudApp has arrows, lines, circles, and emojis available. But to adhere to MLA format guidelines, you’ll want to stick with the text box option.
  4. Annotate your image or document to perfection. When you’re done, click the blue “Save” button and your work will automatically be uploaded to the cloud. From there, you’ll have the ability to email it, send it in Slack chats, attach it to Trello cards, and more!

And that’s it! Whether you use a Mac or Windows computer, you can use CloudApp to quickly annotate your work in the MLA format — or any other format you might like to use.

In Conclusion

The MLA format is an important writing structure to understand for anyone publishing research reports or completing class assignments. It’s even used in professional environments from time to time. Fortunately, you now know everything you need to know to use MLA successfully.

from CloudApp: The Future of Work https://www.getcloudapp.com/blog/mla-format-guide

13 Mind-Blowing Things Artificial Intelligence Can Already Do Today

By now, most of us are aware of artificial intelligence (AI) being an increasingly present part of our everyday lives. But many of us would be quite surprised to learn of some of the skills AI already has. Here are 13 mind-blowing things artificial intelligence can already do today.

1. Read

Is one of your wishes to save time by only having to pay attention to the salient points of communication? Your wish has come true with the artificial intelligence-powered SummarizeBot. Whether it’s news articles, weblinks, books, emails, legal documents, or audio and image files, automatic text summarization by artificial intelligence and machine learning reads the communication and reports back the essential information. Currently, SummarizeBot can be used in Facebook Messenger or Slack and relies on natural language processing, machine learning, artificial intelligence, and blockchain technologies.

2. Write

Would you believe that along with professional journalists, news organizations such as The New York Times, Washington Post, Reuters, and more rely on artificial intelligence to write? They do, and although this does include many "who, what, where, when, and how" formulaic pieces, AI is able to expand beyond this to more creative writing as well. Many marketers are turning to artificial intelligence to craft their social media posts. A novel generated by artificial intelligence was even short-listed for an award.

3. See

Machine vision is when computers can “see” the world, analyze visual data, and make decisions about it. There are so many amazing ways machine vision is used today, including enabling self-driving cars, facial recognition for police work, payment portals, and more. In manufacturing, the ability for machines to see helps in predictive maintenance and product quality control.

4. Hear and understand

Did you know artificial intelligence is able to detect gunshots, analyze the sound, and then alert relevant agencies? This is one of the mind-blowing things AI can do when it hears and understands sounds. And who can dispute the helpfulness of digital voice assistants in responding to your queries, whether you want a weather report or your day’s agenda? Business professionals love the convenience, efficiency, and accuracy provided by AI through automated meeting minutes.

5. Speak

Artificial intelligence can also speak. While it’s helpful (and fun) to have Alexa and Google Maps respond to your queries and give you directions, Google Duplex takes it one step further and uses AI to schedule appointments and complete tasks over the phone in very conversational language. It can respond accurately to the responses given by the humans it’s talking to as well. 

6. Smell

There are artificial intelligence researchers who are currently developing AI models that will be able to detect illnesses just by smelling a human’s breath. The models can detect chemicals called aldehydes that are associated with human illnesses and stress, including cancer, diabetes, and brain injuries, and can even pick up the "woody, musky odor" emitted by Parkinson’s disease before any other symptoms are identified. Artificially intelligent bots could identify gas leaks or other caustic chemicals as well. IBM is even using AI to develop new perfumes.

7. Touch

Using sensors and cameras, there’s a robot that can identify "supermarket ripe" raspberries and even pick them and place them into a basket! Its creator says that it will eventually be able to pick one raspberry every 10 seconds for 20 hours a day! The next step for AI tactile development is to link touch with other senses.

8. Move

Artificial intelligence propels all kinds of movement from autonomous vehicles to drones to robots. The Alter 3 production at Tokyo’s New National Theatre features robots that can generate motion autonomously. 

9. Understand emotions

Market research is being aided by AI tools that track a person’s emotions as they watch videos. Artificial emotional intelligence can gather data from a person’s facial expressions, body language, and more, analyze it against an emotion database to determine what emotion is likely being expressed, and then determine an action based on that info.  

10. Play games

It’s not all serious business with artificial intelligence: it can learn to play games such as chess, Go, and poker (which was an incredible feat). And it turns out that AI can not only learn to play these games but also compete against, and even beat, humans at them!

11. Debate

IBM’s Project Debater showed us that artificial intelligence can even be successful at debating humans in complex subjects. Not only is it able to research a topic, but it can also create an engaging point of view and craft rebuttals against a human opponent.

12. Create

Artificial intelligence can even master creative processes, including making visual art, writing poetry, composing music, and taking photographs. Google’s AI was even able to create its own AI “child”—that outperformed human-made counterparts.

13. Read your mind

This is truly mind-boggling: AI that can read your mind! It can interpret brain signals and then create speech. That’s impressive and life-changing for those with speech impairments, but a little unnerving when you consider the mind-reading aspect of the skill. It’s no surprise that some of the biggest names in tech, including Facebook and Elon Musk, have their own projects underway to capitalize on AI’s mind-reading potential.

Now imagine if these 13 mind-blowing skills were all combined into one super artificial intelligence! Frightening or exhilarating? That’s up to you to decide, but I think we can all agree, it would be mind-blowing!

from Forbes https://www.forbes.com/sites/bernardmarr/2019/11/11/13-mind-blowing-things-artificial-intelligence-can-already-do-today/

8 Tips for Perfect Dark Theme UI

Is Dark Mode just another trend, or something we need to implement in our designs as soon as possible?
The idea behind dark modes is to conserve phone and tablet battery by asking screen pixels to fire less brightly, and to reduce eye strain for users. That’s true, just not entirely true.

1⃣ Google claims that black requires far less power to display on-screen than white, and that might be true, but when it comes to user experience it really depends on the product and conditions.

2⃣ According to studies by Susanne Mayr comparing white text on a black background with black text on a white background, “participants were better performing in the positive polarity condition” (that is, dark text on a light background).

3⃣ Another study, from Sally Augustin, shows that “…brightness and colors can definitely provoke emotion, so muting out an app’s appearance can make it harder to connect with users.”

So what’s the deal with dark modes? In other words: we like it. It’s refreshing, looks clean, different, and new.

Should products include dark mode?
-Yes, users are asking for it, so why not catch up with this “trend”?

Which is the best way to implement Dark Mode?
-Dark Mode can be activated automatically during night hours
-Dark Mode should always be provided as an option

The benefits are:
-It is good for tired eyes
-It is visually appealing
-Feels refreshing

🔼 When to use it:
-When dark goes along with brand colors
-To reduce eye strain
-To support visual hierarchy
-To save energy, on pages that are used for long periods of time

In my opinion, for a better understanding, try comparing light/dark mode with the lights in your room: sometimes turning the lights off feels relaxing, but there is always a switch to turn them back on.

Here is a great summary from a Redditor:
-Night is dark. Screen is bright. Eyes hurt.
-Night is dark. Screen is dark. Eyes not hurt.

1. Avoid Pure Black/White

Pure white has 100% color brightness and pure black has 0%; that extreme contrast causes the user’s eyes to work harder to adapt.

2. Avoid Saturated Colors

Saturated colors often fail WCAG’s accessibility standard against dark surfaces. Less saturated colors from your color palette improve legibility and reduce visual vibration.

3. Accessibility and Contrast

Dark theme surfaces must be dark enough to display white text, ensuring that body text passes WCAG’s AA standard.

4. Communicate depth

The more elevated the surface is, the stronger and brighter the overlay becomes.

5. Don’t convert Light to Dark

Simply converting colors from the light theme to dark variants doesn’t produce a good result. Large surfaces should use a dark surface color.

6. Low-Contrast grays on Imagery Products

Use dark gray rather than pure black; the softer contrast against bright pixels better expresses the colors in photos.

7. Text Opacity = Emphasis

  • High-emphasis text should have an opacity of 87%
  • Medium-emphasis text is applied at 60% opacity
  • Disabled text uses an opacity of 38%
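
Taken together, these emphasis levels are effectively theme tokens. As a minimal sketch (my own illustration; only the opacity values come from the list above, while the token names and structure are hypothetical), they could be captured in code like this:

    # Illustrative only: the opacity values come from the emphasis levels above;
    # the token names and helper are hypothetical, not from any design system API.

    def white_text(opacity: float) -> str:
        """Return an rgba() color string for white text at the given opacity."""
        return f"rgba(255, 255, 255, {opacity:.2f})"

    DARK_THEME_TEXT = {
        "high_emphasis": white_text(0.87),    # primary copy and headings
        "medium_emphasis": white_text(0.60),  # secondary, supporting text
        "disabled": white_text(0.38),         # disabled labels and hints
    }

    for role, color in DARK_THEME_TEXT.items():
        print(f"{role}: {color}")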

8. Avoid Shadows

Shadows on dark themes don’t serve the same purpose as they do on light themes, and they don’t look good.

“Writing articles for people who are in a rush, I value your time, and probably like me, you like reading on the go. Keeping it short so you don’t have to skip any paragraph.”


from Prototypr https://blog.prototypr.io/8-tips-for-perfect-dark-theme-ui-5aa34784784e?source=rss—-eb297ea1161a—4

The power of framework design

This map was not created to introduce a waterfall process. Rather, it was a powerful anchor throughout the project. Files touch every part of Dropbox, so we needed to work across the organization. Our roadmap artifact provided transparency in our approach. At any given moment, we could pinpoint where we were in our process. That helped set expectations and build trust with our collaborators.

It also held us, as designers, accountable for diverging and exploring before we settled on ideas. By scoping a timeline to do so, we carved out the space we needed for high-quality work.

Model before mocks

With any platform project, it’s important that UX and technical planning happen together. So before we got into pixels, we tackled a model that would ground both design and engineering.

Expert Hugh Dubberly defines a model as a hypothesis for how a system could work. At their core, “models describe relationships: parts that make up wholes; structures that bind them; and how parts behave in relation to one another.”

A good model will inform a strong design opinion, because, as Dubberly writes:

Models are not objective. They leave things out. They draw boundaries between what is modeled and what is not; between the system and its environment; and between the elements of the system.

Framing a system—defining it—is editing. What we think of as natural boundaries, inside and outside, are somewhat arbitrary. The people making the model choose what boundaries to draw and where to draw them. That means, they have to agree on the choices.

To start, we diverged on key questions that we wanted our model to answer. Where should our solution live on a spectrum of automatic vs. user-controlled? What core value will our solution offer to end-users? How might we organize a rules-based system that could handle a variable number of file types and jobs to be done?

We grappled with different ways to answer these questions with models. Doing so led to rich discussions that informed a shared point of view about our core elements:

  • Modes: There were too many jobs required of our single file viewer. An all-in-one solution would lead to too much product complexity. Our model would introduce the concept of modes—different views for different jobs.
  • Attributes: There were many details about a file that we needed to account for—things like extensions and types, states of collaboration, visibility settings, and beyond. These details would grow in volume as our technology became more sophisticated. Our model would need to support infinite attributes.
  • Controller: Previews should be contextually aware—smart enough to show the right tools at the right times. We needed technology to determine the right mode to show a user based on attributes. To start, we could set heuristics for our controller to follow. In the long term, our logic would become more elegant with machine learning.

We represented these core elements and their relationships in a high-level diagram.
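
As a purely hypothetical illustration (these names and heuristics are mine, not Dropbox’s actual implementation), a modes/attributes/controller relationship like this might hang together in code roughly as follows:

    # Hypothetical sketch of a modes/attributes/controller relationship.
    # All names and heuristics here are invented for illustration only.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Mode(Enum):
        PREVIEW = auto()   # quick, read-only view
        EDIT = auto()      # in-place editing tools
        PRESENT = auto()   # full-screen, distraction-free view

    @dataclass
    class FileAttributes:
        extension: str
        shared_with_others: bool = False
        opened_from_link: bool = False

    def choose_mode(attrs: FileAttributes) -> Mode:
        """A heuristic 'controller': map a file's attributes to the mode a user sees."""
        if attrs.opened_from_link:
            return Mode.PRESENT
        if attrs.extension in {"doc", "docx", "paper"} and attrs.shared_with_others:
            return Mode.EDIT
        return Mode.PREVIEW

    # Example: a shared document opens in edit mode.
    print(choose_mode(FileAttributes(extension="docx", shared_with_others=True)))

In a sketch like this, the hard-coded heuristics inside the controller are exactly what the article suggests could later be replaced by machine learning.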

from Sidebar https://sidebar.io/out?url=https%3A%2F%2Fdropbox.design%2Farticle%2Fthe-power-of-framework-design

Paradigm Shifts of the Internet: From Portal Sites, Search Engines to Social Networks, What’s Next?

The Pros and Cons of Categories

The categories of portal sites were a direct analogy to libraries, hospitals, or the shelves of bookstores. Every single webpage was indexed under one or more appropriate categories, just as a book is placed into one category of a library according to its topic.

Since information could be accessed only through categories, there was a big problem: if we didn’t know how a specific piece of information was categorized, we couldn’t access it.

It is just like being sick but having no idea which department of the hospital to go to: you have to ask someone who knows for help, and they find you the correct department so you can see the doctor.

Another problem: if some brand-new information didn’t fit into the current categories, it couldn’t be accessed easily either. The categories had to be revised first, and that was costly.

Today you would say: why not just search? Exactly. Search is the answer to the problems created by categories, but it wasn’t at that time. Computer hardware wasn’t good enough to index such a large amount of data from the fast-growing Internet, and computer science and software technology hadn’t matured enough to provide a great search experience.

Then came Google, a small company dedicated to search, which successfully solved the hardware and software problems and went on to become the most powerful tech giant on Earth.

The 2000s: The Search Era

It was the superior technology brought by Google that pushed the first paradigm shift of the Internet: from the paradigm of the portal to a new one, the paradigm of search.

By putting some keywords in the search box, people could access information on the Internet more directly, without categories in the middle. The distance between information and humans was dramatically reduced to a whole new level. That’s a huge leap forward, a true paradigm shift.

The winners of the past, the portal sites, failed quickly under the new paradigm. Some were acquired by competitors; some were simply discontinued, like dinosaurs going extinct overnight. Today we are all familiar with their tragic stories.

Lack of Personal Info

But again, did search bring information and humans together perfectly? Not yet. There was still a gap between information and each person. Search engines simply did not have enough information about you. They did not know your preferences, so they could not give you the answers that best satisfied your queries.

Say you are hungry and looking for something delicious, so you type “cuisine” into a search engine, waiting for a good recommendation. There is a problem: the definition of cuisine is very personal, each individual has his or her own preferences in food, and search engines simply did not know about that. So in the beginning, all they could do was give you a series of very generalized answers: any webpages containing the keyword cuisine, or similar information, with a high ranking score.

Generalized answers won’t satisfy us, so we provide more keywords, narrowing down the search results, and repeat until we get something right.

Lack of personal information is a big weakness of the search paradigm, so search engines have to develop other means to gather as much personal information about each user as they can, to make sure they serve the best user experience. That’s why Google keeps every search history for further analysis, and that’s why Google has so many products other than search: maps, email, news, calendar, translation, even an open-source mobile operating system, and so on. Google simply needs that much data about you; otherwise, it has to guess.

The rule is simple: whoever gets more personal info can further reduce the distance between information and individual users, provide a better user experience, and become the winner of the next paradigm.

That is social networks.

Today: Age of the Social Networks

Social networks effectively reduced the distance between information and individuals to an unprecedented level, because they have far more personal info, provided by users themselves. They know us even better than we know ourselves.

Take Facebook as an example. Every time we log in to Facebook and interact with the UI, we are effectively providing all kinds of information about ourselves. Facebook carefully measures and monitors how we use it: what we like, comment on, or share; what we skip; where we are; who and what are in the photos we upload; and so on. The data it has is so abundant, first-hand, and fresh that it can pick the most attractive content (and ads) via its sophisticated algorithms, and continuously iterate to improve.

Another paradigm shift happened, driven by the powerful new technology of social networks, further reducing the distance between information and humans.

In this decade, social networks like Facebook, Instagram, Twitter, and YouTube have been enormously successful, enabling many new forms of communication and economy, along with smartphones and ever-faster wireless Internet access. They have also created just as much trouble: to name a few, the privacy crisis, mis- and disinformation, and a huge threat to the integrity of democracy, both from within democracies themselves and from outside authoritarian countries.

2020 is approaching. What will be the next paradigm? And what are the benefits and impacts it will bring?

To me, the only answer is artificial intelligence.

The Next Paradigm: Machine Learning/Artificial Intelligence

There is one fundamental difference between the paradigm of AI and the past paradigms: while the old paradigms reduced the distance between information and humans, the paradigm of AI will put information in front and push humans to the back.

What do I mean? The past paradigms provided the best choices for us, while the AI paradigm chooses for us. Thus, the power of decision-making will be in the hands of tech giants and their smart machines, not ours.

Take autonomous vehicles as an example. The final stage of self-driving development, so-called Level 5, could have no driver controls at all: no steering wheel, no accelerator pedal, no brake pedal. You just sit in the vehicle, that’s all. The vehicle automatically monitors its surroundings, communicates with other cars and the traffic control system, and decides the best way to move at each moment.

In short, the car decides how to drive for you. You don’t have to drive, so you don’t decide.

In the coming years, we can expect more and more AI booms in different scenarios. Today most people cannot even sense the presence of AI in their everyday lives, because current AI is only useful in some very narrow or low-level vertical use cases, such as beating the world champion at Go, or recognizing who is who in the images captured by cameras all over the streets, deployed by surveillance authorities.

Before long, there will have to be a very powerful horizontal way to integrate all these vertical AI services into a super killer application for every ordinary individual user.

The smart voice assistant could be that one killer app, if it gets smarter. Once it can deeply understand what we need through casual conversation, it can integrate multiple vertical AI and non-AI services and provide the best result directly to us. Human users no longer need to care about how the decision is made; they can just enjoy the result. Super easy, super convenient.

And whoever can bring this perfect voice assistant to users could be the next superpower among the existing tech giants.

Why do we exist?

When people no longer have to make decisions, when most decisions are made by smart machines much more quickly and correctly than humans can make them, there will be huge impacts. We are already seeing the approaching crisis of unemployment from AI and automation in low-skill or highly repetitive jobs, and we can expect more jobs, even higher-skilled or non-repetitive ones, to be replaced by smart machines.

Besides the huge impacts on society as a whole, I think the most fundamental question under the new paradigm will be: why do we human beings exist? What is our value, to ourselves as individuals and as a whole?

We still have some years left to find our answers to that question.

from The Startup – Medium https://medium.com/swlh/paradigm-shifts-of-the-internet-from-portal-sites-search-engines-to-social-networks-whats-next-54f75abf1620?source=rss—-f5af2b715248—4