As Silicon Valley Turns Attention to Race, Black Entrepreneurs Detail Prejudice

When Matt Joseph was raising money a few years ago for a startup he founded, Locent, a venture capitalist asked Joseph, who is black, if he had considered creating a record label instead of a tech company. 

In a different pitch meeting, a group of investors told Joseph he reminded them of Barack Obama and suggested he get into politics—a comment he felt was “racist and insulting,” even though he thinks they intended it as a compliment. 

from The Information https://www.theinformation.com/articles/as-silicon-valley-turns-attention-to-race-black-entrepreneurs-detail-prejudice

Anifa Mvuemba Staged An Instagram Fashion Show With A Difference


from British Vogue https://www.vogue.co.uk/fashion/article/hanifa-anifa-mvuemba-digital-fashion-show

14 Data-Backed Strategies to Boost Twitter Engagement

Number 4: Use standalone graphics to share information

Photo by Yucel Moran on Unsplash

I love Twitter.

It’s one of my favorite places on the Internet, and one of the few sites I visit more than once per day. There’s so much to see and do that even when I’m not actively tweeting, it’s safe to say I’m reading, clicking links, and favoriting things to look at again later. As far as social media marketing goes, Twitter is one of the major players.

With over 330 million monthly active users on the site, it’s easy to see why Twitter is one of the platforms most businesses and marketers could benefit from joining. But here’s the big question: How do you get some of those 330 million monthly active users to engage with your posts and click on your content?

While building a following and a social media presence takes time, there are strategies and techniques you can use to increase your Twitter engagement quickly, which in turn can lead to more clicks, increased brand awareness, and more conversions.


Official Twitter Account of Netflix

In order for your audience to engage with your posts and click on your content, they need to actually see what you’re posting. One of the best ways to make your content stand out is by adding an image.

Convince&Convert found that sharing images on Twitter increases retweets by 150%. Including a large image with a short summary of text on Twitter is more visually appealing than a text-only post.

In fact, Adweek’s research shows that users engaged at a rate 5x higher when an image was included.


The first, and perhaps most obvious, reason to share your content more than once is to drive more traffic than the initial share did. Tom Tunguz ran an experiment on his own blog to show how reposting the same content helped him boost traffic.

To get an idea of how many people were seeing and sharing his posts, Tom looked at the number of Retweets he got when Tweeting a link to one of his blog posts. We can assume from this that actual visits to his posts increased with each Retweet, as well.

Tunguz charted the average number of retweets a post receives the first, second, and third time he tweets it. On average, each subsequent share gains about 75% of the previous share’s retweets, a very encouraging metric.
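To see what that decay means in practice, here is a quick hypothetical illustration in code; the starting count of 100 retweets is made up, and only the ~75% rate comes from Tunguz’s experiment.

// Hypothetical illustration of the ~75% retweet decay per repost.
// Only the decay rate comes from Tunguz's data; the starting count is made up.
let retweets = 100;
for (let share = 1; share <= 3; share++) {
  console.log(`Share ${share}: ~${Math.round(retweets)} retweets`);
  retweets *= 0.75; // each repost earns about 75% of the previous one
}
// Prints roughly: Share 1: ~100, Share 2: ~75, Share 3: ~56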


Larry Kim’s Twitter Account

Do you have an awesome Tweet you want all of your followers to see? You can pin a tweet to your profile, where it will remain at the top of your page until you remove the pin.

  • Pin a Tweet that has an eye-catching image to boost retweets by 35%.
  • A pinned tweet is similar to an ad, except you don’t have to pay for it. Take advantage of this free advertisement by having a strong call to action.
  • Include a URL in your pinned tweet to receive 86% more engagement.

By now, we all know that using images in tweets will increase engagement, but what about adding an image without any links? A standalone graphic is an image that gives useful information on its own without needing a link back to something.

Official Twitter Account of HubSpot

Research shows that posting a standalone graphic with a quote increases retweets by 19%.

  • Use a graphic with text that gives quick information to your followers.
  • Grab small pieces of information from your blog posts to put in your standalone graphic.
  • Use tools like Canva to create a standalone graphic.

Wendy’s Twitter Account

“Customer emotions become permanent with time. It’s best for an effective intervention to take place as close to the experience as possible,” says Baba Shiv, professor of marketing at Stanford’s Graduate School of Business.

When users respond to your posts on Twitter or mention your business name, respond quickly. If you don’t, then it’s going to seem like you’re not very active on your own Twitter page, or you simply don’t care.

Search Engine Watch found that 65% of surveyed Twitter users expect a response from brands they reach out to on Twitter, and of those users, 53% want that response in under an hour. Make it your priority to respond as fast as possible.

  • Your followers will be much more likely to post comments if they know that you are reading them and that you will respond to them in a helpful manner.
  • Use first names when you are responding. Addressing your followers in this way makes them feel more appreciated, not to mention that people love to be acknowledged. Small gestures like these help to build loyalty.
  • If you respond right away, there’s a chance that the conversation will continue since they may still be on your page.

Official Twitter Account of Better Marketing

Are you only tweeting your own content? Sharing the content of others is one of the best ways to show that you’re not all about you and that you value the work of others enough to share it on your own Twitter feed.

One report found that 82% of marketers curate content. Curating content is a widely used tactic for a reason: it works. Over 50% of marketers who curate content indicate that it has increased their brand visibility, thought leadership, SEO, web traffic, and buyer engagement.

Next time you post on Twitter, mix a few curated pieces in with your usual content to increase engagement among your followers and broaden their experience.


Official Twitter Account of Heinz Ketchup

Twitter didn’t invent the hashtag (#), but it certainly popularized it with the masses.

According to Quicksprout, hashtags are proven to double your engagement rate and help users to easily search for a topic or trend on Twitter.


Hashtags identify the subject of your content, making it easier for Twitter users to stumble across your page when searching for similar subjects.

Because of this, using hashtags is an incredible way to boost social media engagement among both followers and non-followers since you’re making your content more visible.


Tasty’s Official Twitter Account

While images perform better than text, Twitter users love videos, especially on mobile. According to Research Now, the majority of Twitter users (82%) watch video content on Twitter, and most watch on a hand-held screen.

Twitter users value discovering video content. Twitter is all about discovering relevant and interesting content in the moment, so it’s not surprising that 41% of users believe that Twitter is a great place to discover videos.

People also turn to Twitter to be in the know. In fact, Twitter users are 25% more likely than the average consumer to discover video first among their friends.

An analysis found that native video on Twitter drives more overall engagement than third-party videos shared on Twitter: 2.5x replies, 2.8x Retweets, and 1.9x Favorites.


Engagement is directly related to how many followers actually see your tweets, so to maximize it, you must post at the best possible times.

According to CoSchedule’s research into the best times to post on social media, timing alone can earn every tweet a little more engagement.


Use those recommended times as a guideline. Test out different times and find out when your audience is the most active on Twitter. You can also check Twitter Analytics to see when your specific audience is most active.


When you interact with a major player in your industry, it can help get some eyes on you. On Twitter, even talking about or tagging an industry leader or peer can be enough to get extra eyes and engagement on your post.


Whether you start a conversation with them directly or just write a post and tag them in it (in a way that makes sense, like sharing their content or saying you liked their product), they may notice and engage. If they respond to or retweet your content, you can continue to get higher levels of engagement, especially if their audience is active.

Again, this can also help build relationships with big names in your industry, and they could be more inclined to share some of your content or posts later on, likely helping you to get more engagement when they do.


Buffer found that one in four of the people you thank will follow you back. But there’s more to it than that: gratitude also gives your Twitter engagement a serious boost.

Buffer’s research found the following, based on their 50 “thank you for sharing” tweets:

  • 26% (13/50) of the people they thanked, favorited their tweet (recognized their thanks)
  • 30% (15/50) of the people they thanked replied to them
  • 26% (13/50) of the people they thanked followed them
  • 24% (12/50) of the people they thanked engaged with them in more than one way (both replied and followed them, both favorited and followed them, etc.)


What does this data show? Basically, if you look for people who are sharing your content, and offer them a simple “thanks” or express your gratitude, you have a better chance of them following you.


Since tweets only allow so many characters, it makes sense to use abbreviated, shortened links to the content you’re posting. Your whole website address doesn’t need to be listed, as long as users are clicking.


Using shortened links will give you more characters, your Tweets will look cleaner, and it can increase retweets.


Coca Cola Official Twitter Account

Instead of having a plain block of text, consider breaking up the text with an emoticon. Emoticons show a certain element of playfulness that provides your brand with a bit of personality.

In fact, statistics show that using emoticons will boost the share and comment rate of your posts, and can help increase favorites by as much as 57%.

  • Choose one or two emoticons per tweet. “Think of emojis as the ultimate elevator pitch for your business: you have one or two symbols to let people know exactly what value you’re bringing them with every tweet.”
  • Use emoticons when responding to your followers to add a personal touch and show appreciation.
  • Emphasize specific parts of your tweet with a correlating emoticon to intensify engagement.

This one might fall under the title of “common sense” for many of you, so it’s great to see that there’s data to back up the claim. Social media analytics company Beevolve analyzed 36 million Twitter profiles and 28 billion tweets to find the correlation between tweet frequency and Twitter followers. The results (as you might have guessed): Those who tweet more have the most followers.

Specifically:

  • A Twitter user who has sent 1 to 1,000 tweets has an average of 51 to 100 followers
  • Users who have tweeted more than 10,000 times are followed on average by 1,000 to 5,000 users
  • It’s estimated that a person with more than 15,000 tweets has between 100,000 to 1 million followers


I think it’s important to keep a few things in mind with this data:

  • Lots of tweets equal lots of activity. And the more active you are on social media, the more likely you are to gain followers, make connections, and build relationships.
  • Lots of tweets equal lots of experience. As you tweet more, you get better at tweeting.
  • Lots of tweets equal longevity. It makes sense that the longer you’re around on social media, the more time and opportunity you’ll have to grow your followers. Posting 10,000 updates would mean a year’s worth of 27 posts daily. You’d deserve all the followers you get at that awesome pace!

All of these strategies are free, and only require a little extra time and an adjustment to the content you may already be creating for Twitter.

There are 330+ million monthly active users, after all; you just have to find the right strategies to get them to interact with you and your content.

Look at engagement as a stepping stone that leads followers to view your website, subscribe to your newsletter, and purchase your product. Without engagement, you will be tweeting into a black abyss. Building a strong relationship with your Twitter followers will bring forth high engagement.

Although data and studies can help steer you in the right direction, the best way to figure out what works for you and your audience is to test new strategies and tactics. Remember, all audiences are different. Try different images and copy, run A/B tests, and make data-driven changes.

from Medium https://medium.com/better-marketing/14-data-backed-strategies-to-boost-twitter-engagement-8950d77017e1

Bayesian A/B Testing: A More Calculated Approach to an A/B Test

What are some of the reasons you run an A/B test?

When I think of the benefits of A/B testing, I think of one of the most popular and concrete ways to experiment with ad designs that are effective for target audiences. I think of how changing one simple element can be the deciding factor for customers, and that running a test will help me figure out the preferred design.

Up until recently, I thought that there was only one kind of A/B test. After all, the definition itself is pretty straightforward.

Then, I came across a different kind of A/B test. This method still involves testing variants to discover the preference of an audience, but it involves more calculation, and more trial and error.

This method is called Bayesian A/B testing, and if you want to take a more specific, tactical approach to your ad testing, this might be the answer.

But first, let’s talk about how Bayesian A/B testing is different from traditional A/B tests.

Bayesian A/B Testing

There are two types of A/B tests: Frequentist and Bayesian.

Every A/B test has the same few components: two variants, A and B, and a metric that the test measures, such as the number of times an ad is clicked. To determine the winner, that metric is compared statistically.

Let’s apply this to an example using the frequentist, or traditional, approach. In this scenario, you would design two ads and change one variable, such as the copy of the ad. Then, pick the metric, like the number of times an ad is clicked.

The winner of the frequentist A/B test in this example would be whichever ad your target audience clicked the most, based solely on results from that experiment.

If you were to illustrate these components in a Bayesian A/B test, you would approach the test using different data.

That definition can sound a little difficult to visualize without an example, so let’s go over one.

If your previous ad on Facebook drew 867 unique visitors and acquired 360 conversions, earning a 41% conversion rate, you would use that data to inform an expectation. If you figured your next Facebook ad would reach 5,000 unique visitors, you could infer that you’d earn roughly 2,050 conversions based on that prior experience. This would be variant "A."

Let’s say you look at a similar Facebook ad’s performance, which ultimately earned a 52% conversion rate. This is variant "B." What you have done by collecting the data from the two variants is calculate the posterior distribution, and the previous tests you’ve run have now become the groundwork for your Bayesian test.

If, before calculating the posterior distribution, you had beliefs about the conversion rates each variant would earn, you can now update them based on the data you’ve collected. You can ask questions about your test, such as "How likely is it that ‘B’ will be larger than variant ‘A’?" In this case, you can infer an answer of 9%.
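To make the mechanics concrete, here is a minimal JavaScript sketch of that calculation (it is not from the HubSpot article). It approximates each variant’s posterior with a normal distribution, which is reasonable at sample sizes like these, and the visitor and conversion counts are made-up assumptions:

// Estimate P(B > A) by sampling each variant's posterior conversion rate.
function gaussian() {
  // Box-Muller transform: two uniform draws become one standard normal.
  let u = 0, v = 0;
  while (u === 0) u = Math.random();
  while (v === 0) v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// One plausible "true" conversion rate given observed conversions/visitors
// (a normal approximation to the posterior distribution).
function samplePosteriorRate(conversions, visitors) {
  const p = conversions / visitors;
  const se = Math.sqrt((p * (1 - p)) / visitors);
  return p + se * gaussian();
}

const N = 100000;
let bWins = 0;
for (let i = 0; i < N; i++) {
  const a = samplePosteriorRate(480, 1000); // variant A: hypothetical 48%
  const b = samplePosteriorRate(500, 1000); // variant B: hypothetical 50%
  if (b > a) bWins++;
}
console.log("P(B > A) ≈", (bWins / N).toFixed(2));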

Then, the trial and error portion begins.

Bayesian methodology makes decisions by doing some inference. You can calculate the expected loss: the amount your metric would decrease, on average, if you chose a given variant. Set a boundary, such as 2%, that the expected loss should drop below. Once you have collected enough data to show that a variant’s expected loss has dropped below 2%, you’ll have your test winner.

Because the expected loss for a variant is the average amount your metric would decrease by if you chose that variant, your boundary should be small enough that you would be comfortable making a mistake of that size.

The methodology assumes you are willing to accept a mistake that small, and would rather move on to a more refined experiment than waste time chasing certainty below that threshold.
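Sticking with the sketch above (and reusing its samplePosteriorRate helper and hypothetical counts), the expected loss of committing to each variant can be estimated from the same posterior samples and compared against the boundary:

// Expected loss: the average conversion rate you would sacrifice if you
// committed to a variant and the other one turned out to be better.
// Reuses samplePosteriorRate() and the made-up counts from the sketch above.
const RUNS = 100000;
let lossA = 0, lossB = 0;
for (let i = 0; i < RUNS; i++) {
  const a = samplePosteriorRate(480, 1000);
  const b = samplePosteriorRate(500, 1000);
  lossA += Math.max(b - a, 0); // regret if you pick A but B is better
  lossB += Math.max(a - b, 0); // regret if you pick B but A is better
}
const boundary = 0.02; // the 2% threshold from the text
console.log("E[loss | choose A] ≈", (lossA / RUNS).toFixed(4));
console.log("E[loss | choose B] ≈", (lossB / RUNS).toFixed(4));
console.log("Declare a winner?", Math.min(lossA, lossB) / RUNS < boundary);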

If you were to run two experiments, they would stop when the expected loss is below that 2% boundary. You would use the values of your variants to calculate your average loss. Then, you would begin the test again, using these values as the prior distribution for the next run.

Bayesian A/B testing means you can make a business decision knowing the expected loss won’t exceed the boundary you set. You can use the data you’ve collected to continuously run tests until you see metrics increase with each experiment.

When you use Bayesian testing, you can modify the test periodically and improve the results as the test runs. Bayesian A/B testing uses constant iteration to give you concrete results by making small improvements in increments. You don’t have to treat inference as just a final result; instead, you can use it to inform the next variant.

If you’re running A/B tests on software or different channels, you don’t have to change them to run a Bayesian A/B test. Instead, you can look at the tools you have at your disposal in that software to give you more calculated results. Then, you can continuously run those tests and analyze them to pick your winners.

You might use a Bayesian A/B test instead of a traditional A/B test if you want to factor more metrics into your findings. This is a really good test for calculating a more concrete ROI on ads. Of course, if you have less time on your hands, you can always use a frequentist approach to get more of a "big picture" conclusion.

Whichever method you choose, A/B testing is popular because it gives you an inference that can be useful for you in future campaigns.

from Hubspot https://blog.hubspot.com/marketing/bayesian-ab-testing

Kinetic Typography with Three.js

Kinetic Typography may sound complicated but it’s just the elegant way to say “moving text” and, more specifically, to combine motion with text to create animations.

Imagine text on top of a 3D object. Now, can you see it moving along the object’s shape? Nice! That’s exactly what we’ll do in this article: we’ll learn how to move text along a mesh using Three.js and three-bmfont-text.

We’re going to skip a lot of basics, so to get the most from this article we recommend you have some basic knowledge about Three.js, GLSL shaders, and three-bmfont-text.

Basis

The main idea for all these demos is to have a texture with text, use it on a mesh and play with it inside shaders. The simplest way of doing it is to have an image with text and then use it as a texture. But it can be a pain to figure out the correct size to try to display crisp text on the mesh, and later to change whatever text is in the image.

To avoid all these issues, we can generate that texture using code! We create a Render Target (RT) where we can have a scene that has text rendered with three-bmfont-text, and then use it as the texture of a mesh. This way we have more freedom to move, change, or color text. We’ll be taking this route following the next steps:

  1. Set up a RT with the text
  2. Create a mesh and add the RT texture
  3. Change the texture inside the fragment shader

To begin, we’ll run everything after the font file and atlas are loaded and ready to be used with three-bmfont-text. We won’t be going over this since I explained it in one of my previous articles.

The structure goes like this:

init() {
  // Create geometry of packed glyphs
  loadFont(fontFile, (err, font) => {
    this.fontGeometry = createGeometry({
      font,
      text: "ENDLESS"
    });

    // Load texture containing font glyphs
    this.loader = new THREE.TextureLoader();
    this.loader.load(fontAtlas, texture => {
      this.fontMaterial = new THREE.RawShaderMaterial(
        MSDFShader({
          map: texture,
          side: THREE.DoubleSide,
          transparent: true,
          negate: false,
          color: 0xffffff
        })
      );

      // Methods are called here
    });
  });
}

Now take a deep breath, grab your tea or coffee, chill, and let’s get started.

Render Target

A Render Target is a texture you can render to. Think of it as a canvas where you can draw whatever you want and then place it wherever you need. This flexibility makes the texture dynamic, so we can later add, change, or remove things in it.

Let’s set a RT along with a camera and a scene where we’ll place the text.

createRenderTarget() {
  // Render Target setup
  this.rt = new THREE.WebGLRenderTarget(
    window.innerWidth,
    window.innerHeight
  );

  this.rtCamera = new THREE.PerspectiveCamera(45, 1, 0.1, 1000);
  this.rtCamera.position.z = 2.5;

  this.rtScene = new THREE.Scene();
  this.rtScene.background = new THREE.Color("#000000");
}

Once we have the RT scene, let’s use the font geometry and material previously created to make the text mesh.

createRenderTarget() {
  // Render Target setup
  this.rt = new THREE.WebGLRenderTarget(
    window.innerWidth,
    window.innerHeight
  );

  this.rtCamera = new THREE.PerspectiveCamera(45, 1, 0.1, 1000);
  this.rtCamera.position.z = 2.5;

  this.rtScene = new THREE.Scene();
  this.rtScene.background = new THREE.Color("#000000");

  // Create text with font geometry and material
  this.text = new THREE.Mesh(this.fontGeometry, this.fontMaterial);

  // Adjust text dimensions
  this.text.position.set(-0.965, -0.275, 0);
  this.text.rotation.set(Math.PI, 0, 0);
  this.text.scale.set(0.008, 0.02, 1);

  // Add text to RT scene
  this.rtScene.add(this.text);
 
  this.scene.add(this.text); // Add to main scene
}

Note that for now, we added the text to the main scene to render it on the screen.

Cool! Let’s make it more interesting and “paste” the scene over a shape next.

Mesh and render texture

For simplicity, we’ll first use a BoxGeometry together with a ShaderMaterial, which lets us pass custom shaders along with the time and render texture uniforms.

createMesh() {
  this.geometry = new THREE.BoxGeometry(1, 1, 1);

  this.material = new THREE.ShaderMaterial({
    vertexShader,
    fragmentShader,
    uniforms: {
      uTime: { value: 0 },
      uTexture: { value: this.rt.texture }
    }
  });

  this.mesh = new THREE.Mesh(this.geometry, this.material);

  this.scene.add(this.mesh);
}

The vertex shader won’t be doing anything interesting this time; we’ll skip it and focus on the fragment instead, which is sampling the colors of the RT texture. It’s inverted for now to stand out from the background (1. - texture).

varying vec2 vUv;

uniform sampler2D uTexture;

void main() {
  vec3 texture = texture2D(uTexture, vUv).rgb;

  gl_FragColor = vec4(1. - texture, 1.);
}
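For completeness, the skipped vertex shader only needs to forward the UVs that the fragment shader samples with; a minimal passthrough (the same one we’ll extend later for the shadow) would look like this:

varying vec2 vUv;

void main() {
  vUv = uv;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.);
}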

Normally, we would just render the main scene directly, but with a RT we have to first render to it before rendering to the screen.

render() {
  ...

  // Draw Render Target
  this.renderer.setRenderTarget(this.rt);
  this.renderer.render(this.rtScene, this.rtCamera);
  this.renderer.setRenderTarget(null);
  this.renderer.render(this.scene, this.camera);
}
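One detail the snippet above leaves out: the shaders that follow animate with uTime, so it has to tick forward each frame before drawing. A minimal sketch, assuming a THREE.Clock created during setup and stored on the instance:

render() {
  // Advance the time uniform that the fragment shader reads.
  // this.clock is assumed to be a THREE.Clock created in init().
  this.material.uniforms.uTime.value = this.clock.getElapsedTime();

  ...
}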

And now a box should appear on the screen where each face has the text on it:

Looks alright so far, but what if we want to repeat the text many times around the shape?

Repeating the texture

GLSL’s built-in function fract comes in handy for repetition. We’ll multiply the texture coordinates by a scalar and use fract to wrap them back between 0 and 1.

varying vec2 vUv;

uniform sampler2D uTexture;

void main() {
  vec2 repeat = vec2(2., 6.); // 2 columns, 6 rows
  vec2 uv = fract(vUv * repeat);

  vec3 texture = texture2D(uTexture, uv).rgb;
  texture *= vec3(uv.x, uv.y, 1.);

  gl_FragColor = vec4(texture, 1.);
}

Notice here that we are also multiplying the texture by the uv components so that we can see the modified texture coordinates visually. This helps us figure out what is going on; there are very few resources for debugging shaders, so the more ways we can visualize what’s happening, the easier it is to debug. Once we know it’s working the way we intend, we can comment out or remove that line.

We’re getting there, right? The text should also follow the object’s shape. Here’s where time comes in! We’re going to add it to the x component of the texture coordinate so that the texture moves horizontally.

varying vec2 vUv;

uniform sampler2D uTexture;
uniform float uTime;

void main() {
  float time = uTime * 0.75;
  vec2 repeat = vec2(2., 6.);
  vec2 uv = fract(vUv * repeat + vec2(-time, 0.));

  vec3 texture = texture2D(uTexture, uv).rgb;

  gl_FragColor = vec4(texture, 1.);
}

And for a sweet touch, let’s blend the color with the background.

This is basically the process! RT texture, repetition, and motion. Now that we’ve looked at the mesh for so long, using a BoxGeometry gets kind of boring, doesn’t it? Let’s change it in the final bonus chapter.

Changing the geometry

As a kid, I loved playing with and twisting those tangle toys; perhaps that’s why I find satisfaction in knots and twisted shapes. Let this be an excuse to work with a torus knot geometry.

For the sake of rendering smooth text, we’ll exaggerate the number of tubular segments the knot has.

createMesh() {
  this.geometry = new THREE.TorusKnotGeometry(9, 3, 768, 3, 4, 3);

  ...
}

Inside the fragment shader, we can repeat as many columns as we want, but we need to keep the number of rows equal to the number of radial segments, which is 3.

varying vec2 vUv;

uniform sampler2D uTexture;
uniform float uTime;

void main() {
  vec2 repeat = vec2(12., 3.); // 12 columns, 3 rows
  vec2 uv = fract(vUv * repeat);

  vec3 texture = texture2D(uTexture, uv).rgb;
  texture *= vec3(uv.x, uv.y, 1.);

  gl_FragColor = vec4(texture, 1.);
}

And here’s our tangled torus knot:

Before adding time to the texture coordinates, I think we can make a fake shadow to give a better sense of depth. For that we’ll need to pass the position coordinates from the vertex shader using a varying.

varying vec2 vUv;
varying vec3 vPos;

void main() {
  vUv = uv;
  vPos = position;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.);
}

We can now use the z-coordinates and clamp them between 0 and 1, so that regions of the mesh that are farther from the screen get darker (towards 0), and those closer to the screen get lighter (towards 1).

varying vec3 vPos;

void main() {
  float shadow = clamp(vPos.z / 5., 0., 1.);

  gl_FragColor = vec4(vec3(shadow), 1.);
}

See? It sort of looks like white bone:

Now the final step! Multiply the shadow to blend it with the texture, and add time again.

varying vec2 vUv;
varying vec3 vPos;

uniform sampler2D uTexture;
uniform float uTime;

void main() {
  float time = uTime * 0.5;
  vec2 repeat = -vec2(12., 3.);
  vec2 uv = fract(vUv * repeat - vec2(time, 0.));

  vec3 texture = texture2D(uTexture, uv).rgb;

  float shadow = clamp(vPos.z / 5., 0., 1.);

  gl_FragColor = vec4(texture * shadow, 1.);
}

Fresh out of the oven! Look at this sexy torus coming out of the darkness. Internet high five!


We’ve just scratched the surface making repeated tiles of text, but there are many ways to add fun to the mixture. Could you use trigonometry or noise functions? Play with color? Text position? Or even better, do something with the vertex shader. The sky’s the limit! I encourage you to explore this and have fun with it.

Oh! And don’t forget to share it with me on Twitter. If you got any questions or suggestions, let me know.

Hope you learned something new. Till next time!



from Codrops https://tympanus.net/codrops/2020/06/02/kinetic-typography-with-three-js/

Typography classification in Augmented Reality


As we progress in the augmented reality space, the complexity of information is increasing with the introduction of more features and functions. Hence the existing typographic rules and structures of information in AR are not sufficient to solve for new variables.

Text is no longer limited to static consumption; new challenges like movement, rotation, and rendering of text (frame rate, resolution) are bringing up issues like perspective distortion, distance reading, and distortion of letter shapes. However, some of these challenges are not new at all, although the context has changed because of the three-dimensional medium. For example, learnings from highway signage typography can be applied to scenarios where you want to convey quick information to a user wearing AR glasses while walking on the road. In this case, directly translating the guidelines might not work perfectly because there are limitations of text rendering, vibrating text, the brightness of displays, and so on. This is where typography has to evolve to solve these novel challenges.

1. Anchoring of Information:

Before I jump into explaining the text classification, I would like to explain the anchoring of text, which controls the behaviour of text in different scenarios.

Anchored to head: In this case, the information moves along with the head movement of the user and always stays in front of him/her.

Anchored to space: The virtual elements are anchored to real-world coordinates in 3D space around the user. Hence the information stays at a particular position and the user sees it only when he/she is looking in its direction.

2. Placement zones

I have divided the user’s view into three regions based on the distance and priority of information that can be displayed in these regions.

Figure 3: The distances are approximations made based on different studies and guidelines for AR displays and ongoing testing by me.

2.1 Heads-up Display (HUD) Region

This region is reserved for UI elements that are anchored to the head of the user and stay in the user’s view no matter where the user is looking (figure 1). It can be used for showing essential information like the time or user-controlled notifications, similar to the status bar on smartphones. However, I recommend using this space sparingly, for only the absolutely necessary elements of your use case. It is also best not to place objects too close to the user, as that results in accommodation-vergence conflict, which causes visual fatigue.

Placement in this region, close to the eyes, enables the user to quickly see essential info by shifting focus from the real world and fixating on the information in the HUD region, rather than moving their head, as they would if the same info were placed below or to the sides of the main view.

2.2 User Interface (UI) Region

This is the ideal region, where all the main experiences should be placed for the most comfortable viewing. In the UI region, virtual objects like a browser window can be anchored either to the head or to the space around the user. This is the most interactive of the three regions, where the user can manipulate and play around with the virtual objects.

2.3 Environment (World) region:

It houses all the elements that are anchored in space to objects which are out of the user’s control, like virtual signage/billboards or location markers which give you info about real-world objects. The augmented information in this region can extend to infinity, but it is up to the experience designers to decide how far they want to extend the experience. What I mean is: if you are walking while wearing AR glasses, should you see info for a few blocks in front of you, or for as far as you can see? This is an interesting challenge, because information density increases with distance.

3. Classification of type of text

I have been working on the classification below to understand and define different scenarios and the considerations that should be kept in mind while choosing typefaces, setting text, and even designing typefaces for different applications. The classification can also help you pick the right rendering methods.

Figure 4: Classification table which describes various states and parameters linked to different types of text.

3.1 Text in HUD

In this case, the text sticks to the field of view of the user and moves along with the user’s head movement.

3.2 Text for long reading

It should ideally be placed in the UI region, within a range of 5 metres, for a better reading experience. You can allow users to move the text to optimize the distance based on their reading preferences.

3.3 Sticky info text

Text which is anchored to real-world objects (usually in close range, up to a few meters away from the user) with an orientation fixed relative to the object. This means it changes position and direction based on how the user interacts with the real-world object it is anchored to (micro-level interactions in the real world).

3.4 Signage text

It is similar to sticky info text in that it is anchored to a real-world object, but in this case the user can’t move or change the orientation of the object. The information is anchored to macro-level objects like geo-locations, buildings, vehicles etc.

3.5 Responsive text

It can be placed in both UI and environment regions where it changes its orientation (perspective) based on the user’s movement or specific programmed behaviours.

3.6 Ticker text

The term ticker comes from news tickers, the element which has moving text information. Quite a useful method to attract attention and show more information in a small area. Eg: notifications in HUD, showing information in supermarkets.

4. Parameters that make these text types unique

4.1 Viewing angle:

Fixed: the text remains right in front of the user and maintains the angle of view when the user is moving towards or around it. In this case, there is no perspective distortion.

Variable: the text stays in a fixed orientation and viewing angle changes based on the position of the user. Perspective distortion of text is a major issue here.

4.2 Text state:

Stationary: The most familiar way of consuming text is when it is static and right in front of us. Eg. text on shop signage, billboard etc.

Moving: Cases where the text is constantly moving. Mostly used to solve issues like showing more info in limited space or to attract the user’s attention. Eg. news and share market ticker strips.

4.3 User’s state:

Still: When the user is sitting or standing still at a particular location and consuming the information. Most of the text we consume currently is designed keeping in mind the static state of the user, be it reading a book or reading something on screens.

Moving: With augmented reality expanding into the real world and becoming part of day-to-day life, a new challenge is to make text consumable when the user is in motion (walking/running). The closest case we have right now is HUDs in cars.


Note: All the information in this article is based on my ongoing research and some aspects might change and get updated as I move further. Stay in touch to get the latest updates: Twitter, Medium and LinkedIn

In the coming articles, I’ll share how this classification and the parameters will help you in making type decisions for your AR projects.

This is Part-VI of a series of articles in which I am discussing the typographic aspects of AR. It is based on my ongoing research on typefaces for AR headsets which started as part of my MA in Typeface Design at the Department of Typography and Graphic Communication, University of Reading, UK. The articles will help type designers and interface designers to understand the intricacies of the text in AR, to improve their workflow and design process. Click to read the article I, article II, article III, article IV and article V.



from UX Collective – Medium https://uxdesign.cc/typography-classification-in-augmented-reality-v1-1-adae7a08d2d?source=rss—-138adf9c44c—4

The Kawaiization of product design

Over the last year or two, I’ve noticed a certain style emerge in brand and product design.

Look at the graphic below and you’ll see it. The colors are soft and muted, the shapes rounded and the typography unobtrusive. It’s what you could describe as clean. It’s approachable. It’s inoffensive. It’s almost… cute.

Zoom out and you’ll notice this particular aesthetic is everywhere.

As a designer, you can choose your response to it. Some, seeing how it’s proliferated in the tech world, may call it unoriginal. Others deem it "design for designers." There’s a hint of truth in all of it. I personally think it may be the most strategic design we’ve seen lately, even at the expense of originality.

The merit of this style is one thing to consider – and there’s no shortage of criticism in our community, if you’re looking for that – but I’m more curious to know: Why is this trend happening? What prompted it? Is it backlash from a previous trend or is there a deeper psychological reason behind it? We could easily dismiss it as the latest design trend, but I think it goes deeper.

The Kawaiization of product design

The word “Kawaii” is a prominent part of Japanese culture. In English, it most closely translates to “cute.” It’s a term used for everything from clothing to food to entertainment to physical mannerisms, to describe something charming, vulnerable, childlike or loveable. As I understand Kawaii, it’s almost more of a feeling than an adjective, a word that defies complete definition.

When a baby’s face makes us smile, or we see a puppy and have an urge to squeeze it, that’s Kawaii. And that positive feeling translates to objects and experiences beyond the classically “cute.” In Japan, the effect is employed to reduce agitation around construction sites. Airlines and Japanese police forces capitalize on it to soften their perception or broaden their appeal.

Kawaii is essentially fulfilling the purpose of design.

Similar to how beauty is a function, Kawaii can be seen as a function. It elicits positive emotions that encourage social interaction. There are countless experimental studies on how the effect of Kawaii promotes calm behavior and narrows your focus. It’s even theorized to have healing power.

Looking at recent trends, it seems that Kawaii has, in some form, reached the West and influenced the way we are designing our digital products. As we move away from the clean yet cold aesthetic of minimalism, we’re adopting the psychological power of cuteness.

Our app designs have become soft, sweet, inoffensive. Bank interfaces use pastels, rounded corners and soft drop shadows to make mundane or unpleasant tasks more “fun.” Animojis have taken over our chats, and our productivity tools are starting to look like Animal Crossing.

We are using Kawaii to make our products more palatable and less transactional. Claymation-style 3D hands imply our design tool is our friend. Circles and squiggles say our form-creation app is here to party. The muted colors and lack of sharp corners signal safety. It is approachable. It is charming. It’s Kawaii.

What we’re seeing in product design may be minimalism evolving, or it may be a response to previous trends. Or maybe it’s our way of dealing with greater societal issues. Studies have suggested that Kawaii, or fashion sub-cultures off-shooting from it, are a way of coping with social pressures and anxiety. Like putting on a mask to ease the pain of reality.

It could be just a trend, or it could be we are becoming more human, more childlike because we’re tired of being grownups. Given the context of the world around us, we are searching for positivity and comfort, and that’s why we add emojis to our spreadsheets.

"Havana" landing page image by Tran Mau Tri Tam.
"Specify" landing page image by Romain Briaux.

from House of van Schneider https://vanschneider.com/the-kawaiization-of-product-design

Twitter Tips: Most Effective Ways to Create Polls via @MattGSouthern

Twitter is back with more examples of good copy versus bad copy when writing tweets, this time with examples related to publishing polls.

Twitter’s Global Creative Lead Joe Wadlington hosts what has now become a monthly video series full of Twitter tips.

Here’s his advice on Twitter polls in the latest installment of Good Copy, Bad Copy.

Twitter Polls – Engaging & Useful

Polls on Twitter can be a valuable source of market research data, but only if they’re utilized strategically.

When it comes to writing copy for polls, marketers have to find a balance between being engaging while also gathering usable information.

Good copywriting comes into play when writing the body of the tweet and also when crafting the poll options.

As explained later on in the examples, it’s easy to make the mistake of writing engaging copy that doesn’t actually produce any useful data.

In that case – it doesn’t really matter how many people engage with the poll if it does not generate anything your company can benefit from in the long run.

Bad Copy for Twitter Polls

Here’s the example provided of bad copy for Twitter polls:

“We’re completely out of ideas! Tell us what to put on our blog next.”

  • Blog posts
  • Videos
  • How-tos
  • Cat videos

There are a number of things wrong with this copy, not the least of which is the negative note it starts out on.

After demanding responses from the audience, it goes on to lead them toward a series of skewed answers.

Blog posts and videos are content formats, while how-tos are a topic that could be covered in either a blog post or a video.

Cat videos is a funny and engaging answer, but it’s engaging to a fault.

Chances are most people will choose cat videos, and you may end up with a bunch of responses on the poll, but you won’t be able to use any of that data.

So that’s the bad copy. Here’s the good copy version.

Good Copy for Twitter Polls

The revised version with good copy reads:

“We want to hear from you! What type of content do you want to see on our blog?”

  • Product how-tos
  • Twitter trends
  • Marketing best practices

Right from the start, this poll begins by soliciting feedback from the audience in a positive way.

“We want to hear from you” shows that you care about what your audience has to say.

Especially when compared with “We’re completely out of ideas!”

The poll then leads users toward choosing from a selection of topics, rather than a mixture of topics and formats.

This is good copy, and in the end it will provide the business with information it can use to improve its blog.

See the full Good Copy, Bad Copy video below.

Transcript:

Twitter is where you go to ask your audience what they want, and polls are a great way to do this. But this is bad copy.

“We’re completely out of ideas! Tell us what to put on our blog next.”

And then each of the answers included in this poll are a little skewed.

Blog posts and videos are a format, whereas how-tos are a topic. And cat videos – well that’s a funny joke answer but everyone’s going to vote cat videos and you won’t learn anything from this poll. This is bad copy.

The good copy version: “We want to hear from you! What type of content do you want to see on our blog?”

Asking questions always stimulates engagement, and each of these answers are something that the poll results can tell you and that you’ll learn from.

Product how-tos, Twitter trends, marketing best practices – this is going to help your team along the way.

It’s good copy.

from Search Engine Journal http://tracking.feedpress.it/link/13962/13488253