Apple has reportedly acquired API integration developer Stamplay for 5 million euros ($5.678 million).
The Rome-based startup won a contest to make the best use of new Visa APIs, according to VentureBeat. While it’s unknown exactly why Apple is interested in Stamplay, the company’s experience in the financial payments industry could benefit the tech giant’s digital payment service, Apple Pay. In fact, Apple is expected to expand the service with a self-branded credit card next week.
Stamplay co-founder Giuliano Iacobelli has described the company’s focus as “Lego for APIs,” enabling business developers to easily connect both internal and external APIs to apps.
As part of the acquisition, Apple agreed to keep Stamplay’s founders on at the company, although they will now be Apple employees.
Apple has continued to expand Apple Pay, adding new banks and credit unions throughout 2018. Last year, the service popped up for iPhone users in Brazil, Ireland and Spain. It also launched in Poland in June, working with banks including Alior, BGŻ BNP Paribas, BZ WBK, Getin, mBank, Nest Bank, Pekao and Raiffeisen Polbank, with more expected to be added.
In addition, Apple CEO Tim Cook noted that total transactions tripled on a year-over-year basis.
“We believe the availability of Apple Pay at major transit systems has been a key driver of adoption among commuters, and in March, we launched Express Transit with Apple Pay in Beijing and Shanghai — the second- and third-largest transit systems in the world,” Cook said at the time.
And earlier this year it was reported that the service was looking to make inroads in Australia through the country’s dual-network debit cards, where users can opt for Visa or EFTPOS, an electronic payments system tied to debit or credit cards presented at the point of sale and available only in Australia.
from PYMNTS.com https://www.pymnts.com/apple/2019/apple-payments-api-developer-stamplay/
Developers are a tough market — we always know we could do it ourselves, and better.
Designer tools are a very noisy market, with a combination of big dollars spent by incumbents such as Adobe, hot VC-backed startups, and a scattering of scrappy newcomers.
Bridging those two worlds with a genuinely better offering is very, very ambitious. Supernova is a relatively recent Mac tool which does just that (and they just graduated Y-Combinator’s W19 class).
Approaching the Supernova
I first noticed Supernova on Product Hunt when they were #2 product of the day, back in July 2017. With a long background in such tools (see below), I was fascinated but also sceptical. It’s easy to talk big about generating code but very hard to get right. I was deep in non-UI development and didn’t think I needed the tool but liked what I was hearing.
By early 2018 two things had become obvious — I was going to have to put a lot more effort into UI development of my product, and Supernova was convincingly real. I’d been going from paper prototype straight to Xcode but needed to play with high-fidelity prototypes of a lot more screens. For the first time, as an amateur designer, I started throwing together screens in Sketch.
Welcome to Hand-off Hell.
In many teams, there’s an uncomfortable designer-developer loop called hand-off. A bunch of static pictures representing the screen have to be completely re-created using programming languages and other tools. The only things from the original, carefully drawn images that go directly into the final product are small icons. The bigger your team and product, the more painful this loop. The result is often developers making fine visual design decisions because there’s no time to go back to a designer.
If you are really unlucky, as a developer, you just get static images. You don’t even get vector art originals in Sketch, where you can go inside and see what measurements were used. I’ve found myself using tools to work out the colours and relative sizes from a bunch of screenshots saved as JPEG images.
Small teams and solo founders need a lot of leverage. Supernova is to UI development as Cloud computing is to backend.
Even working by yourself, going from Sketch drawings to working code was a manual process. There have been a few tools doing code generation from Sketch, mostly for web, and one native iOS tool that’s since apparently died on the vine. Nothing really accelerated the process, before Supernova.
Supernova Studio is the best, most usable and technically-credible code generation and prototyping tool I’ve seen in twenty years.
You start by bringing in the static screen layouts from Sketch, which are little more than rectangles with occasional words. Quickly, in Supernova, you mark them as different components, buttons, image views, tables of cells. Some of these components are automatically identified (a photograph is obviously some kind of image view). This semantic analysis is continually improving.
You can add interactions, navigation to other screens and animations (discussed more below). Within Supernova alone, the preview lets you see a reasonable representation of the app experience.
Supernova provides built-in translation, using Google Translate. This means at any time you can preview your screens in different languages.
In particular, because you can preview on different device sizes, you can see how your prose will wrap differently. This is not just about aesthetics — it’s insulting to a user to cut off the description of a feature just because you only test in English.
Dividing and Unifying with Code Generation
Ask any two developers their opinion on tools that automatically generate code and you will likely get three opinions back.
Smart engineers are lazy engineers and appreciate tools which save time building the same skeletons or boilerplate code. We’re used to at least the basics being created with different project types in Xcode, Android Studio and other tools.
Taking that a step further to generating an entire, nearly working screen, is where people’s previous bad experiences start to surface. In theory, going from layout to finished code is a relatively mechanical process. In practice, people have their own ways of doing things. This means, for some teams, Supernova’s code generation will be an unused feature. But, even if you have a good reason to ignore it, having compilable code generated means any measurements and layout logic can at least be examined.
Being able to inspect generated code unifies your design and development teams with harmony rather than frustrated questions. Comparing changes in the generated code is also an easy way for developers to see the evolution of a design.
If you’re unified because your design and development teams reside in the same brain, then a tool like Supernova is a sanity-saver as well as a major time-saver. The smaller the team, the more I recommend sticking closely to the generated code and relaxing, without having to learn as much detail about each platform.
Currently, Supernova generates platform-distinct code for iOS (Swift) and Android (Java or Kotlin) as well as the cross-platform React Native (JavaScript) and new hotness Flutter (Dart). The latter two have been added in the year I’ve been using the tool.
So, as well as being a productivity tool, it becomes an educational one. I can design a trivial user interface with a couple of controls, see the familiar Swift code and iOS resources then compare that to Flutter or Kotlin.
Animation is the new table-stakes in user experience.
Supernova gives you both kinds of animation a modern app needs
The gif below was recorded from the Supernova preview screen and demonstrates two kinds of animation. The project files are available on GitHub, including the original Sketch.
First, there’s the trivial slide-in effect as you change screens. A button on a menu has the Interaction type Navigate to screen. The most basic kind of prototyping experience starts with linking together your screen images with different hotspots.
A much more complex animation is built up and triggered on entry to the second screen. I wanted a flight-control kind of illusion of different tools you can use in the Composer to fly into the menu at the bottom. A copy of the icon has three extra shapes overlaid on top of it which move at different rates when the animation starts.
The Supernova team consider the existing animation very much an introductory product. They’ve shared a roadmap showing many features coming, but even these simple property animations were enough for me to design a simple micro-interaction I was happy with.
Remember that Supernova is a full code-generating product, not just a prototyping tool. Many of their competitors stop at letting you visually edit an animation, maybe saving a video to show how it works.
Supernova’s Codex provides live, real-time generated code alongside your visual design, letting you flip between languages to see how animation works in each. The Swift code below has a portion highlighted because I selected the Microphone’s Translate Y property animation. It’s comforting and educational, when you have built up a complex animation, to be able to step through the components and see the matching code.
Why is this guy an authority on tools?
I’ve been an obsessive tool user and maker as a developer for over 35 years. (Let me tell you about the way VAX/VMS let you combine FORTRAN and BASIC subroutines with COBOL…).
In the Classic Mac era, I worked on the two dominant code generation tools. I was contracted to write the Think Class Library code generators for George Cossey’s Marksman (aka Prototyper). Think Class Library (TCL) was a major OO framework, part of the Think C product acquired by Symantec. It was a 2nd generation OO framework, similar to Apple’s MacApp. Greg Dow, the TCL architect, went on to write the dominant C++ framework of the Classic Mac era — Metrowerks’s PowerPlant.
From 1992 to 1999 I worked in close collaboration with Spec Bowers of AppMaker on his code-generation product, especially on the PowerPlant generators. I extended his product to integrate my OOFILE database and forms framework. Later, I created a compile-time mapping (PP2MFC) that allowed you to compile PowerPlant frameworks for Windows. I wrote custom code generators for AppMaker to generate Windows apps and we co-marketed these, including a booth at Macworld, back when it was the dominant Moscone-filling conference.
My most recent commercial SDK/tooling gig was working at Realm on the Xamarin SDK from 2015–2017 (just to let you know I’m not an old dude who’s past it). Oh, and I live in the world’s most isolated continental capital city — all the above collaborations have been online from Perth, Western Australia. I really understand remote work.
Why do we prototype? Most of you, dear readers, would agree that the design process for any medium, whether physical or digital, cannot exist without prototyping. Prototyping is such an essential part of any design work that we can’t even imagine not having it as part of our workflow.
In fact, it’s so essential, so obvious… that it stopped being discussed as something we’re supposed to care about and optimize. After all, everyone knows how to prototype, right?
Unfortunately… wrong.
Throughout the last decade, I’ve seen absolutely terrific approaches to prototyping that pushed projects forward in great, initially unforeseen directions, and some truly terrible attempts at prototyping that led projects astray.
To really understand what differentiates bad prototyping from excellent prototyping, we have to explore the reasons why we prototype in the first place.
In my opinion, there are 3 reasons:
Exploration. To explore a variety of design concepts without investing resources into producing the final product.
Testing. To test many design concepts with users before any production work starts.
Specification. To show how the product is supposed to work without investing into documentation that describes details of every single feature.
Prototyping cuts costs and increases our understanding of the experiences that we’re supposed to build.
So what could go wrong? Apparently plenty.
Changes in the process in the last 10 years
In the past decade, we’ve observed massive growth in the UX design industry. Teams became bigger, the number of available positions went through the roof (in 2017, CBS listed UX as the 9th most attractive job in America!) and design became more valued by executives than ever. Forbes called it the UX gold rush. According to one of the fathers of modern UX, Jakob Nielsen, we can expect even more growth in the next 25 years.
With all the growth, we also observed changes in the design process. Ten years ago, the process of designing the experience and the process of designing the final graphic design were, more often than not, separated. We used different tools for every part of the process (typically Axure + Photoshop) and quite often – different people. Designers, focused on the prototyping process, would typically work on the interaction between the human and the interface and test and iterate over a low to mid fidelity prototype. The aesthetic part of the work would be a graphic designer’s job.
Over the years, we observed a gradually growing desire to merge these two parts of the process and roles.
One could argue that it makes a lot of sense. As humans we don’t dissect our experiences to separate aesthetics from function, and hence great design needs to embrace both. If a designer is able to efficiently work within the two worlds – why would that unification of positions and processes be bad? Sounds like a great design process.
Unfortunately, more often than not it’s a recipe for disaster.
First of all, it’s really hard to find designers who can work efficiently with both the experience and the aesthetic parts of the design process. And while, in my opinion, we should all aspire to build this unification of skills and knowledge in ourselves, it is a process that, industry-wide, will take years, if not decades. Patience is a virtue of career growth in a craft industry like design, but the job market remains as impatient as ever. While universal designers are slowly growing their skills, the demand for them is at its highest. And with that… the design process broke.
Following the demand in the market, designers who are supposed to work on crafting interactions between human and machine started to use high-fidelity vector design tools. Spending more time on crafting all the visual details early on leaves little to no time for prototyping. So instead of working on a realistic reflection of the user experience for exploration, testing and specification, designers started to ship static artboards connected with “hotspots” and links (in effect, slideshows) as prototypes. With that:
testing with users became limited,
exploration became rare (the sunk cost of graphic design!),
specification for development became inaccurate.
Hotspot-based slideshows of graphic design mockups are not a fit for any of the reasons to prototype. Yet, they became the dominant process these past few years.
Prototyping and design got sacrificed on the altar of the industry growth.
When hotspot–based prototyping is OK
Alright, let’s catch a breath. The situation is not great, but it’s not all bad either.
Hotspot-based design slideshows can simulate very basic use cases. Over the years, tools started to animate changes between static artboards, which can generate a visually attractive animated prototype for a couple of simple use cases.
Examples of use cases where the hotspot approach can work well:
simple landing pages
small changes to simple design patterns
quick demos intended only to demonstrate design skills to other designers (Dribbble demos!)
More complicated situations need a more advanced approach to prototyping and more advanced tools. Attempts to fit complex design work into a static vector design tool with highly limited prototyping capability end up as a set of suboptimal user experiences, a broken design process (especially at the boundary between design and development!) and, at times, completely broken design patterns.
When hotspot-based prototypes break the design process… and the web
How can a simplistic design process, static design tools and hotspot-based prototypes break the web? One could write a book about it.
When a designer focuses on designing the interface with a static, vector design tool (such as Sketch, Figma, XD, Studio…), certain aspects of the user experience become simply unavailable. As it happens, many of them are absolutely crucial for prototyping and testing digital experiences. The limitations of these tools take designers hostage.
Examples? In all these vector design tools:
Users cannot enter any content into a prototype (there are no text fields, checkboxes, radio buttons…)
Content provided by users cannot affect design
Components cannot have interactions that are triggered by user actions or changes in other interactive components
… and many more!
The list can go on and on. Vector design tools limit the ability of designers to emulate the entire planned user experience. Forget about testing forms and validations, or advanced forms of navigation. Vector tools reduce everything to a static image that you can only partially bring to life with hotspots.
And what’s the result? Wrong design decisions, untested prototypes, tension between design and engineering… everything that proper prototyping can solve completely disappeared from the design process. Why? Because of vector design tools and hot–spot prototyping.
Here’s a concrete example that you’ve definitely experienced personally.
You sign up for a web app. You’ve just entered your email address, and now it’s time for a password. You’ve entered your choice of password into an input and you’re ready to submit the information. Just when you expect to move forward, you get an error message: “Password needs more than seven characters, one number and one special character”.
Frustrating, isn’t it?
The security reasons are unquestionable, but why wouldn’t the form dynamically check the content of the input and show the requirements before you submit the information? It’s not technologically challenging, so… why?
Most likely, the form was designed that way because vector design tools don’t allow elements and inputs to have states. So a designer probably put together two static artboards, one with the default state and one with the error state, and linked them together with a hotspot. The engineer looked at it and built exactly what she was asked.
And there you have it—the wrong design tool likely led to a horrible user experience. Could that be avoided with written documentation? A conversation? Yes. Most definitely. But when you build a process around a faulty set of tools, you’re increasing the risk of errors.
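The dynamic check described above is only a few lines of JavaScript. Here is a minimal sketch of the idea (the element IDs and the exact rules are illustrative assumptions, not code from any particular product):

// Live password validation: surface requirements while the user types.
// Element IDs ('password', 'password-hint') are hypothetical.
const input = document.getElementById('password');
const hint = document.getElementById('password-hint');

input.addEventListener('input', () => {
  const value = input.value;
  const missing = [];
  if (value.length <= 7) missing.push('more than seven characters');
  if (!/\d/.test(value)) missing.push('one number');
  if (!/[^A-Za-z0-9]/.test(value)) missing.push('one special character');

  // Show feedback before submit, not after.
  hint.textContent = missing.length ? `Password needs ${missing.join(', ')}` : '';
});

A static artboard can’t express this behavior at all; a code-based prototype can, which is exactly the gap discussed here.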
Processes run by humans break faster than good tools run on… machines.
The future of prototyping is here. And it’s code–based
Let’s get back to the beginning of the post. We want prototyping to empower:
Exploration
Testing
Specification
We know that a combination of a vector design tool and hotspot prototyping is not the answer; it leads to completely broken experiences. Do we have to go back to the tools we used 10 years ago? No.
Let me show you two prototypes created entirely in UXPin without any coding:
Password with dynamic validation
Unlike vector design tools, which are primarily optimized for working with illustrations and icons (the only real use cases for the vectors output by all these tools), UXPin has been built from scratch as an interface design tool. To achieve this goal, UXPin is built in the paradigm of code-based design tooling, which gives designers access to the powers of HTML, CSS and JavaScript without asking them to actually code anything. You can design forms, plan validation (all elements can have states, and you can validate users’ input with simple expressions!) and create chains of conditional interactions. All of that in a highly visual design environment that is ready to cover the entire flow.
Welcome back, real prototyping. Sleep well, hotspot limitations! It’s time to improve the web.
Understanding machine learning using simple code examples.
“Machine Learning, Artificial Intelligence, Deep Learning, Data Science, Neural Networks”
You must’ve surely read somewhere about how these things are gonna take away future jobs, overthrow us as the dominant species on Earth, and how we’d have to find Arnold Schwarzenegger and John Connor to save humanity.
With the current hype, thoughts like these are no surprise.
But, what is Machine Learning and what is Artificial Intelligence?
Machine learning is the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence.
And what is a Neural Network?
Artificial neural networks or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. The neural network itself is not an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs.
Yes, machine learning is a subset of Artificial Intelligence, and no, AI does not mean Terminators who’ve spread using the internet and have a hub called Skynet.
Traditional forms of programming rely on a specific set of instructions for the computer in a specific language, which a compiler turns into assembly and machine code that the CPU understands and executes.
So, we decide what the computer gives us as output. We’re the brain in this process, and we tell computers exactly what to do and when to do it.
That’s the general premise of traditional programming.
Machine learning here is a bit different. You use data to train your machine learning model, letting the machine make decisions based on the outcomes of this training.
Confused? Here’s an example:
How do you understand any new concept as a human? You see the concept, try to figure out what it’s saying, see some examples and understand how to do similar examples using the same concept.
Right?
This is exactly what machine learning is, except here we give the examples to our model, which churns out output based on the previous outcomes found in the data.
Yes, this joke somewhat coarsely represents how machine learning works.
Now that you understand the basic concept of machine learning, let’s get into some simple code examples.
Here, I’ll be using the machine learning library ‘brain.js’ with JavaScript and Node.js.
For this example, we’ll be using a simple and very small amount of data.
So we have 4 football teams as input, namely 1, 2, 3 and 4. Why are these just numbers and not interesting names?
Well, I am not that innovative; blame my unoriginality.
So, if the output is 0, that means the first team won, and if the output is 1, the second team won. E.g. input: [1, 3], output: [1] → team 3 won.
So, now let’s code this.
// Here we use the brain.js library to get a ready-made neural network
const brain = require('brain.js');
const network = new brain.NeuralNetwork();

// Now let's train the network on the data
network.train([
  { input: [1, 2], output: [1] }, // team 2 wins
  { input: [1, 3], output: [1] }, // team 3 wins
  { input: [2, 3], output: [0] }, // team 2 wins
  { input: [2, 4], output: [1] }, // team 4 wins
  { input: [1, 2], output: [0] }, // team 1 wins
  { input: [1, 3], output: [0] }, // team 3 wins
  { input: [3, 4], output: [0] }  // team 3 wins
]);
This code trains your neural network on the data provided. Now you can get a probable outcome for any match-up using machine learning.
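For example, you can ask the trained network about a match-up like this (a minimal sketch; brain.js returns a value between 0 and 1 rather than a hard 0 or 1, and the exact number varies between training runs):

// Ask the network who wins a match between team 1 and team 3.
const [probability] = network.run([1, 3]);
// A value near 1 suggests the second team (3) wins; near 0, the first team (1).
console.log(probability);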
And yes, you’ve built yourself a machine learning model, trained on your data, that can predict which team would win.
But of course, real-world machine learning can’t rely on 7 lines of input data. Lots of data is needed to get desirable results with the maximum accuracy possible.
So let’s get into another example with a larger amount of data.
We’ll use the following data file for our input, saved as data.json.
[ { "text": "my unit test failed", "category": "software" }, { "text": "tried the program, but it was buggy", "category": "software" }, { "text": "i need a new power supply", "category": "hardware" }, { "text": "the drive has a 2TB capacity", "category": "hardware" }, { "text": "unit-tests", "category": "software" }, { "text": "program", "category": "software" }, { "text": "power supply", "category": "hardware" }, { "text": "drive", "category": "hardware" }, { "text": "it needs more memory", "category": "hardware" }, { "text": "code", "category": "software" }, { "text": "i found some bugs in the code", "category": "software" }, { "text": "i swapped the memory", "category": "hardware" }, { "text": "i tested the code", "category": "software" } ]
The JSON data file above contains some sentences, each allocated a category. Our machine learning model will take a line as input and tell us the category it belongs to.
So let’s get into some code.
const brain = require('brain.js');
const data = require('./data.json');
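The training step itself doesn’t appear in the snippet above, so here is a minimal sketch of what it plausibly looks like with brain.js’s recurrent LSTM network (the 2,000 iterations figure comes from the description below):

// Create an LSTM network and train it on the text/category pairs.
const network = new brain.recurrent.LSTM();

network.train(data.map(item => ({
  input: item.text,
  output: item.category
})), {
  iterations: 2000 // repeat training passes for better accuracy
});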
The code above uses the library to create a long short-term memory (LSTM) neural network, which is trained on the data for about 2,000 iterations.
For better results, we train our model many times with the same data to get more accuracy in results. Think of it like doing the same example question many times, until you get it perfect without making any mistakes.
You can test your network like this:
const hardwareOutput = network.run('I fixed the power supply'); // 'hardware'
const softwareOutput = network.run('The code has some bugs'); // 'software'

console.log(`Category: ${hardwareOutput}`); // Category: hardware
console.log(`Category: ${softwareOutput}`); // Category: software
And yes, you’ve built yourself a more complex machine learning model, one that works out which category a statement belongs to.
What are you waiting for?
Go and show off your neural networks!
In case we’re meeting for the first time here, I am Pradyuman Dixit and I mostly write about Machine learning, Android Development and sometimes about Web Development.
You can read my other Machine Learning posts here:
For web designers today, creating a website can mean a whole lot more than just functionality, usability and aesthetic appeal. Today, every new-born website requires a thorough integration of Search Engine Optimization (SEO) protocols to become crawlable and get indexed by search engines such as Google.
A good website can attract great amounts of traffic. However, to make sure your traffic is relevant, geo-specific, and hails from the target segment, you must utilize SEO properly. According to one piece of HubSpot research, 77% of people research a brand before getting in touch with it. This means your site design, structure, content, and marketing practices must be spot on if you want spectacular search results!
Both off-page and on-page SEO are imperative to the ranking process for any website on Google. Here, we are going to discuss why web designers should know about on-page SEO well enough to create a website that not only attracts visitors, but also ranks on top of Google search engine result pages (SERPs).
1. Higher Rankings
On-page SEO involves many elements, such as HTTP status codes and search-engine-friendly URLs. Other aspects include the correct addition of meta tags, meta descriptions and heading tags, which shape how your link appears on Google SERPs. All of these elements make a huge difference in on-page SEO. Therefore, a web designer who knows these details must also know when to apply them, in the right order, so that the website receives higher rankings on Google.
2. Greater Search Accuracy
With the growing number of internet users, the demand for data has also increased. There are many brands for any given product, hundreds of online stores, and numerous branches of the same brand. Before any potential customer makes an appearance in a store, they are highly likely to search for it on the internet. The statistics clearly support this: 18% more shoppers prefer Google over Amazon when searching for a product, and search engines are preferred over other websites 136% of the time for the same purpose. Similarly, local searches lead 50% of mobile users to visit a nearby store within 24 hours. This further necessitates that web designers know on-page SEO well, so that the client’s business page is more visible on the web.
3. More Mobile Traffic
The State of Inbound report suggests that generating traffic is one of the main marketing challenges faced by website designers and marketers. Website designers have the opportunity to integrate SEO metrics from the start, making the website not only more user-friendly but device-responsive as well. According to marketing technology facts compiled by Sweor, 57% of mobile users abandon a brand’s website if it is poorly responsive on mobile. SEO helps you improve these flaws and add high-quality visual content for better marketing. Designers can use this to their advantage and focus on building an attractive, rankable and responsive website.
4. Higher Engagement
In the present era, every online brand is a reflection of how far up it sits in Google’s rankings. On-page SEO helps build a strong network of internal linking that keeps users engaged on the website by offering them more valuable information at the right time.
It also helps bring exposure to those sections of the website that need more attention and helps generate a positive user experience for the visitor. This helps the brand focus on its goals and deploy different marketing strategies to boost revenues.
5. Impartial Benefits for SMEs
While large businesses may dominate small ones in terms of size, operations and employee strength, SEO does not discriminate between SMEs and large enterprises. SEO does not require a sizeable investment, and most entrepreneurs and SMEs can afford to hire a few resources or even build their own department. However, SMEs with constrained budgets may not be able to afford a dedicated SEO department. Therefore, web designers must know SEO beforehand, since there is no guarantee they will get any guidance from the company once the website goes live.
6. More Quality Traffic
Designing a website with proper on-page SEO helps Google’s spiders crawl through your URLs faster and index your pages more accurately on the SERPs. Research conducted by Moz suggests that 71.33% of clicks go to results on the first page of search results. This means more high-quality traffic will be driven to your website, generating more leads and increasing conversion rates and ROI as well.
7. Using Innovative Technologies
Content has a direct effect on your customers. According to MindMeld, 60% of users have started using voice search features to interact with search engines when making queries. This means designers now need to optimize websites and content for voice search as well. According to Backlinko, the average content length that helps a website rank on the first page of Google is 1,890 words. Also, using the most suitable keywords gives your website’s ranking a boost, bringing it onto the first results page of the search engine. To get more advanced SEO features, web designers also deploy SEO extensions for more optimized performance and cost-effectiveness.
8. Increases Page Loading Speed
Every website designer knows that loading speed plays a deciding role in online rankings as well as user experience. Some of the factors that lower webpage speed are large images, bad URLs and coding, and themes with too many widgets. Knowing on-page SEO helps the designer avoid such errors while designing the website, which improves loading speeds far more efficiently than fixing them once the site is operational.
9. Greater User Experience
You must be wondering how SEO improves UX, right? Well, good SEO offers informative, readable and highly usable content to readers. It also helps in designing a visually attractive website that navigates nicely and performs well. These features make users happy and enhance their experience on the web page. So if you’re planning to leave a long-lasting impression right from the start, you must put in some on-page SEO from the beginning.
10. Cost-Effectiveness
It’s irrefutable that SEO has a great cost advantage. A skilled web designer knows how systematic integration of on-page SEO can save costs that would otherwise pile up once the website starts getting traffic. Everything from page titles, meta descriptions, meta tags, URL structure, body tags and keyword density down to image SEO must be prepared before the operational stage. Neglecting these key points can be detrimental to the website’s overall progress and may result in expensive retrofitting at a later date.
A large majority of marketers wouldn’t consider themselves psychologists. Yet understanding the growing field of marketing psychology can help persuade and influence audiences in powerful ways online.
Great campaigns happen at the intersection of marketing and psychology. The sweet spot where your content and messaging connects with your audience on a deep, human level.
This week on The Science of Social Media, we’re exploring six powerful psychological biases and how they influence human behavior online. Knowing the factors that affect decisions will help take your social media marketing to the next level.
Let’s dive in!
6 Powerful Biases That Influence Human Behavior
Psychological Bias 1: The Bandwagon Effect
Psychological Bias 2: Zero Risk Bias or the Certainty Effect
Psychological Bias 3: In-Group Favoritism
Psychological Bias 4: Confirmation Bias
Psychological Bias 5: The Endowment Effect
Psychological Bias 6: Not Invented Here
Hailley: Let’s kick off the show by quickly exploring how a psychological bias is defined. In this case, we’re talking about cognitive biases:
Cognitive and psychological biases are defined as repetitive paths that your mind takes when doing things like evaluating, judging, remembering, or making a decision.
Just like instincts, they evolved so that we don’t have to think as much for every decision that we make and they help us conserve energy.
Brian: More than knowing these for marketing, it’s also really interesting to be able to pinpoint your own cognitive biases to be aware of what goes into your decisions, judgments, and opinions.
Let’s get started looking into each of the biases we’ve identified and how they relate to marketing. Just a note: there are a lot of cognitive biases in our brains; these are a select few that we found particularly useful for marketing, and in no way a complete list.
1. The Bandwagon Effect
Hailley: Let’s start with one that most people have probably already heard of, it’s called the bandwagon effect. You’ve probably seen the expression “they jumped on the bandwagon” and that’s what this cognitive bias is referring to.
The idea is that the rate of uptake of beliefs, ideas, fads, and trends increases the more that they have already been adopted by others.
In other words, the bandwagon effect means that someone is more likely to do, say, or believe something if a high number of other people have already done so. This is sometimes also called groupthink or herd behavior.
Brian: In social media, I’ve seen this happen when a new social network opens up and it feels like everyone (celebrities, other marketers, friends) is joining up, so you end up joining, too.
It’s pretty easy to imagine how helpful this can be with marketing. If it feels to a new user like everyone loves your product then they’re more likely to love your product too.
One way we can use this perception to our advantage is testimonials. If you have a lot of testimonials, it might feel like everyone loves your product, company, or business.
Hailley: Exactly. And it’s interesting because I really think user-generated content can help a lot here if you share photos on Instagram of all the other customers enjoying your product for example.
And even influencer marketing can add to this effect if it starts to feel like a lot of influencers love your product then people are more likely to jump on the bandwagon.
Ultimately, this cognitive bias is all about critical mass so you have to have the numbers for this feeling to really take root.
2. Zero Risk Bias or the Certainty Effect
Brian: Next up, let’s chat about zero risk bias, or the certainty effect. It is exactly what it sounds like: our minds have a tendency to favor paths that seem to have no risks, paths that are certain.
This is why you see many brands and businesses offer money-back guarantees and risk-free trial offers. This feeling of zero-risk is really appealing to customers especially when it’s a new product or service that they’re experiencing.
Here, the more you can reassure customers and potential customers of limited risks, the more they are likely to feel better about their decision and that decision will even come to them more easily.
Hailley: I’ve seen this done in a lot of great Instagram video ads recently where on top of showing the product there’s always text that says “money back guarantee” or “risk-free.” Or in social media posts with a photo of your product, you can also play to the certainty effect in your text as well.
While it’s easy to use this one on your website for the copy it’s also definitely possible to use it in your advertising and social posts as well and that will really help leverage the zero risk message to your advantage.
3. In-Group Favoritism
The next bias on our list is in-group favoritism. This means that people prioritize products and ideas that are popular with a group they’ve aligned themselves with.
Brian: It really makes you realize how powerful your identity is. Let’s say you’ve aligned yourself with a certain sports team, well the data shows that you and everyone else who identify with that sports team are more likely to buy similar products and use similar services. Essentially, you and the others who identify as a group favor specific things.
However, experiments have suggested that group identities are flexible and can change over time so keep that in mind as well.
One really good example of a company leveraging this was Apple. They really built an “us vs. them” mentality among their customers with their marketing campaigns and essentially created their own in-group.
Hailley: What this means in terms of marketing psychology is that if you can find these identity markers and you know what in-groups your customers and potential customers are aligned with you can choose your marketing strategy accordingly.
For example, you can join a bunch of communities where your ideal customer hangs out to learn more about their identifiers or create surveys among your current customers to learn about identifiers. And it might take a bit of work but you can also create your own community and in-group if that’s a good fit for your social media marketing!
Quick shout out, check out www.buffer.com/slack if you want to join our Buffer community on Slack.
4. Confirmation Bias
Brian: Another really popular bias is called confirmation bias.
This is the effect where our mind searches for, interprets, favors, and recalls information that confirms or amplifies beliefs that we already have.
This effect is even stronger when it comes to emotionally charged issues and for deeply entrenched beliefs so in those cases instead of looking for new information people stick to what they already believe.
It’s easier and less work for your brain to stick to your current beliefs than to have to go through the decision-making process and choose a whole new set of beliefs all over again.
Instead, your brain is looking to back up your current beliefs and reassure you that it was a good decision.
It’s interesting to note that even scientists fall prey to this bias, it is very common.
Hailley: When it comes to confirmation bias, it also tends to contribute to overconfidence in personal beliefs.
First and most importantly you really need to know your audience and what their existing sets of beliefs are. You can do this by checking out a few of them on social media and looking for articles they share or asking them if you’re in touch with customers often. There are quite a few ways!
From there, you can share information from your brand that they already believe to be true. And if you do that well, they’re going to already agree with the content that you’re sharing and they’ll put more trust and belief in your brand as well.
Brian: Another way to incorporate this is to have really detailed product descriptions that assure people of the things they likely already believe about your product or business, maybe related to the quality or customer service, or value.
5. The Endowment Effect
Hailley: We have two more cognitive biases for you today. One is called the endowment effect and this is already quite popular among marketers.
It’s the idea that people assign more value to things merely because they own them.
So as marketers there are a lot of ways to play to this effect. Think of free coupons, free trials, and sample products.
All of those experiences create a sense of ownership that people are less likely to want to give up.
Focusing on marketing psychology, we offer free trials at Buffer and once someone has spent the time importing their social media accounts and incorporating Buffer into their routine they are less likely to want to give up the ownership they feel over their account when the trial ends.
Brian: The strange thing about the endowment effect as well is that people tend to pay more to keep something they already own than to get something new that they do not own — even when there is no cause for attachment, or even if the item was only obtained minutes ago.
If you’re in a mall and you buy an expensive sweater and then go to the store next door and see another nice sweater for even less money, you’re more likely to keep the original expensive sweater because you’ve already claimed ownership over it.
This point in marketing psychology is also related to status quo bias. This bias talks about how people like things to stay the same. So it, sort of, works in tandem with the endowment effect because once you have ownership over something you want that to stay the same, you don’t want to give it up.
6. Not Invented Here
Hailley: Let’s talk about something called “not invented here.”
Not Invented Here is the aversion to using products or accepting ideas that were developed outside of a group. As a social phenomenon, it can manifest as an unwillingness to adopt an idea or product because it originates from another culture.
If you as a customer don’t recognize, identify with, or understand a product or service you’re less likely to use it.
Brian: Exactly, and a very common way to overcome this bias is for newer companies to align themselves with well-known brands in content partnerships or swaps.
Another common way to use this marketing psychology is to feature the logos of well-known media companies on your website. If someone trusts the opinion of a place like FastCompany, for example, and you’ve been featured in FastCompany, then that person is more likely to trust your brand because you’re associated with them.
This blog post was first written on the Buffer Social blog on January 21, 2019.
Apple’s “Shot on iPhone” photo contest sparked some controversy back in January over whether it would pay winning photographers to use their photos in ads, but the company quickly clarified that licensing fees would be paid. Well, the contest has ended, and Apple has just unveiled the winning iPhone photos.
The 10 winning photographers represent countries all around the world and were selected by an international panel of judges: Pete Souza, Austin Mann, Annet de Graaf, Luísa Dörr, Chen Man, Phil Schiller, Kaiann Drance, Brooks Kraft, Sebastien Marineau-Mes, Jon McCormack and Arem Duplessis.
Here are the winners along with explanations by select judges on why the photo was picked:
Chen Man says: “This is a photo filled with lovely color and sense of story in the composition. Zooming in, you can see details of each family and their unique touch. The basketball hoop is placed right in the middle of the photo, adding more stories behind the image.”
Annet de Graaf says: “The narrative in architecture. There is actually life behind the surface of an average apartment building in an unknown city. Vivid colors and a perfect composition with the basketball board right in the middle! Great eye.”
Austin Mann says: “This image took a lot of patience and great timing … with the iPhone’s zero shutter lag and Smart HDR, we’re able to see both the raccoon’s eyes and the deep shadows inside the log … something that would have previously been nearly impossible with natural light.”
Phil Schiller says: “The stolen glance between this raccoon/thief and photographer is priceless, we can imagine that it is saying ‘if you back away slowly no one has to get hurt.’ A nice use of black and white, the focus on the raccoon and the inside of the hollow log provides an organic movement frozen in time.”
Phil Schiller says: “A reflection that looks like a painting, two worlds have collided. You are compelled to think about where and how this photo was taken, the bird flying in the corner provides the single sign of life in an otherwise surreal composition.”
Chen Man says: “Distortion and reflection at a strange angle — this photo creates a fantastic feeling.”
Austin Mann says: “I love how accessible this image is: You don’t have to travel to Iceland to capture something beautiful, it’s right under your nose. The way the lines intersect, the vibrant color, the sense of old and new … this is just a great image.”
Luísa Dörr says: “I like the simplicity of this image, the composition, light, details, everything looks good. Then you see one small line that looks wrong and makes me think what happened, where is this place, who was there. For me a good image is not only one that is strong or beautiful, but makes you think about it — and keep thinking.”
Sebastien Marineau-Mes says: “Love how the heart shaped water puddle frames the subject, capturing a glimpse of the world as the subject hurriedly walks past.”
Brooks Kraft says: “A unique perspective and a new take on the popular subject of shooting reflections. I like that the subject is evident, but you are not really sure how the photo was taken. The puddle is the shape of a heart, with nice symmetry of the subject. The depth of field that iPhone has in regular mode made this image possible, a DSLR would have had a difficult time keeping everything in focus.”
Brooks Kraft says: “A portrait that captures the wonderment of childhood in a beautiful setting. Great composition that shows both the personality of the child and the experience in the surroundings.”
Pete Souza says: “Nice portrait and use of background to provide context. The placement of the child’s face is in an optimal place — lining her up so the background directly behind her is clean and not distracting. The setting is a familiar one — I’ve probably stood in this exact spot. But the picture is not like any I’ve seen from this location.”
Jon McCormack says: “This image is very well thought through and executed. The background pattern holds the image together and the repeated smaller versions of that pattern in the water droplets create a lot of visual interest. The creative use of depth of field here is excellent.”
Sebastien Marineau-Mes says: “Very unique composition and color palette, playing to the strengths of iPhone XS. What I find most interesting is the background pattern, uniquely magnified and distorted in every one of the water droplets. I’m drawn to studying and trying to elucidate what that pattern is.”
Kaiann Drance says: “Looks like a simple scene but a good choice of using black and white to elevate it with a different mood. Helps to bring out the dramatic contrast in the clouds and the surrounding landscape.”
Luísa Dörr says: “I feel like this landscape was treated like an old portrait. The texture of the mountains evokes an old, wrinkled face. Portraits and landscapes are the oldest ways of creative representation by humans. There’s something about it that belongs to the realms of the subconscious mind, and this is mainly what appeals to me about this picture; the part that I’m not able to explain.”
Kaiann Drance says: “Gorgeous dynamic range. There’s detail throughout the photo in the meadow, trees, and clouds. Beautiful deep sky and pleasing color overall.”
These 10 winning photos will now be featured on Apple’s billboards (in select cities), stores, and website.
from Sidebar https://sidebar.io/out?url=https%3A%2F%2Fpetapixel.com%2F2019%2F02%2F26%2Fhere-are-the-winners-of-apples-shot-on-iphone-photo-contest
In 1994, the Italian housewares manufacturer Alessi released Anna, a corkscrew topped with a woman’s smiling face. It was created by Italian architect, artist, and designer Alessandro Mendini and inspired by Mendini’s friend, the designer Anna Gili. As you stab the screw into a cork and twist, Anna’s arms rise up over her head in a silent hallelujah to the wine-fueled revelry that awaits. Today you can buy all manner of wine openers: electric ones, air pressure pumps, one-handed varieties. But how many corkscrews can make you laugh out loud?
Exuberant design was Mendini’s specialty. Mendini died last week, age 87, and his death leaves a void in the school of thought that favored emotion and surprise over the cold efficiency that has come to dominate much of design, calibrated as it is to the precise and bottomless needs of the technology industry.
Mendini was trained as an architect, but he had deep roots in the art world. A postmodernist, he was a central figure in Italy’s Radical Design movement, which sought to imbue art in design, and which served as a precursor to the influential Memphis style that today’s young designers (and many a corporate copycat) have revived and remixed for mainstream consumers. Mendini’s Proust chair, designed in 1978, was a feverish bricolage: an upholstered Baroque armchair splattered in a pointillist painting by the neo-Impressionist Paul Signac, with a name lifted from French literature. An icon of postmodernism, it is now in the permanent collection of the Museum of Modern Art and the V&A in London.
Mendini also worked as a journalist and was editor of the prestigious Italian architecture magazine Domus from 1979 to 1985. But his popular legacy will be most pronounced in the dozens of kitchen products and home decor he developed for Alessi and others. Each is a testament to the idea that design is not merely a vehicle for solving problems; it can be a source of simple pleasure.
Anna, for instance, was so beloved, Mendini reprised her likeness in a champagne cap, a pepper mill, a tea set, a kitchen timer, and a bottle cap. You could give your kitchen’s entire top drawer over to Anna’s goofy grin if you were so inclined. Consider how unusual it was to portray a literal face in industrial design in the 20th century, at a time when Mies van der Rohe’s ubiquitous catchphrase “less is more” represented the peak of taste and sophistication. The tyranny of minimalism continues today. Take the many smartphones and smart speakers that are designed to be so invisible, they’re easy to forget altogether–to the detriment of consumers. Mendini’s work offers a refreshing antidote. His design was never quiet. He didn’t shy away from figurative representation. Another one of his kitchen designs is a parrot-shaped corkscrew, the feathers of which have the same frenzied print of the Proust chair. Open a bottle of wine, and watch the bird flap its dazzling wing.
Even Mendini’s subtler designs felt revelatory. For Kartell, the Italian manufacturer of high-end plastic furniture, he created Roy, a series of side tables that resemble colorful stools from afar. Up close, the resonant patterns of Roy Lichtenstein’s Pop Art appear on the surface. Another kitchen item for Alessi, the Tegamino pan, looks like any other pan. But it has undulating handles that conjure up the gooey textures of a scrambled egg and revel in the pot’s reason for being: to put warm food in your mouth.
It seems like kismet that some of Mendini’s last Alessi designs were for children. The Alessini collection, a set of whimsical plates, bowls, cups, and cutlery, was designed to capture the imagination of the most naturally curious among us. There’s a radical dignity to them, and to all Mendini’s works. They suggest that consumers are worthy of joy and pleasure, that the mundane but crucial rituals in our lives–cooking, drinking, spending time with children–are not merely chores to slog through, but moments to celebrate. We are what we eat, and what we eat it in.
from Fast Company https://www.fastcompany.com/90310020/the-lost-art-of-designing-for-pleasure?partner=feedburner&utm_source=feedburner&utm_medium=feed&utm_campaign=feedburner+fastcompany&utm_content=feedburner