Andrew Ng has likened artificial intelligence (AI) to electricity in that it will be as transformative for us as electricity was for our ancestors. I can only guess that electricity was mystifying, scary, and even shocking to them — just as AI will be to many of us. Credible scientists and research firms have predicted that the likely automation of service sectors and professional jobs in the United States will be more than 10 times as large as the number of manufacturing jobs automated to date. That possibility is mind-boggling.
So, what can we do to prepare for the new world of work? Because AI will be a far more formidable competitor than any human, we will be in a frantic race to stay relevant. That will require us to take our cognitive and emotional skills to a much higher level.
Many experts believe that human beings will still be needed to do the jobs that require higher-order critical, creative, and innovative thinking and the jobs that require high emotional engagement to meet the needs of other human beings. The challenge for many of us is that we do not excel at those skills because of our natural cognitive and emotional proclivities: We are confirmation-seeking thinkers and ego-affirmation-seeking defensive reasoners. We will need to overcome those proclivities in order to take our thinking, listening, relating, and collaborating skills to a much higher level.
I believe that this process of upgrading begins with changing our definition of what it means to “be smart.” To date, many of us have achieved success by being “smarter” than other people as measured by grades and test scores, beginning in our early days in school. The smart people were those who received the highest scores by making the fewest mistakes.
AI will change that because there is no way any human being can outsmart, for example, IBM’s Watson, at least without augmentation. Smart machines can process, store, and recall information faster and better than we humans. Additionally, AI can pattern-match faster and produce a wider array of alternatives than we can. AI can even learn faster. In an age of smart machines, our old definition of what makes a person smart doesn’t make sense.
What is needed is a new definition of being smart, one that promotes higher levels of human thinking and emotional engagement. The new smart will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating, and learning. Quantity is replaced by quality. And that shift will enable us to focus on the hard work of taking our cognitive and emotional skills to a much higher level.
We will spend more time training to be open-minded and learning to update our beliefs in response to new data. We will practice adjusting after our mistakes, and we will invest more in the skills traditionally associated with emotional intelligence. The new smart will be about trying to overcome the two big inhibitors of critical thinking and team collaboration: our ego and our fears. Doing so will make it easier to perceive reality as it is, rather than as we wish it to be. In short, we will embrace humility. That is how we humans will add value in a world of smart technology.
from artificial intelligence – Google News http://news.google.com/news/url?sa=t&fd=R&ct2=us&usg=AFQjCNFoIeppDt_3ZN7KvBSMtqlOIvdnWw&clid=c3a7d30bb8a4878e06b80cf16b898331&ei=st9IWdj_O6v4hQGYjKWQBw&url=https://hbr.org/2017/06/in-the-ai-age-being-smart-will-mean-something-completely-different
“Fundamental physics shows how hard it is for us to grasp even the simplest things in the world. That makes you quite skeptical whenever someone declares he has the key to some deeper reality.”
Adding to the canon of these meditations is the celebrated English cosmologist and astrophysicist Sir Martin Rees — the last European court astronomer in his position as Astronomer Royal to the House of Windsor and science adviser to the Queen of England.
I was brought up as a member of the Church of England and simply follow the customs of my tribe. The church is part of my culture; I like the rituals and the music. If I had grown up in Iraq, I would go to a mosque… It seems to me that people who attack religion don’t really understand it. Science and religion can coexist peacefully — although I don’t think they have much to say to each other. What I would like best would be for scientists not even to use the word “God.” … Fundamental physics shows how hard it is for us to grasp even the simplest things in the world. That makes you quite skeptical whenever someone declares he has the key to some deeper reality… I know that we don’t yet even understand the hydrogen atom — so how could I believe in dogmas? I’m a practicing Christian, but not a believing one.
The central problem of religious dogma, of course, is that the mythology of “God” offers a single cohesive story that contends to explain all of “Creation” — a theory that claims its truthfulness not by empirical evidence but by insistent assertion. In a sentiment that calls to mind Sagan’s abiding wisdom on the vital balance between skepticism and openness, Rees illustrates how the scientist regards a theory:
I find it irrational to become attached to one theory. I prefer to let different ideas compete like horses in a race and watch which one wins.
When asked whether he believes that scientists make more intelligent decisions as citizens, Rees responds:
[Scientists] bring a special perspective to things. For example, as an astrophysicist, I’m used to thinking in terms of extremely long periods of time. For many people, the year 2050 is distant enough to seem unimaginably far away. I, however, am constantly aware that we’re the result of four billion years of evolution — and that the future of the earth will last at least as long. When you always have in mind how many generations might follow us, you take a different attitude toward many questions of the present. You realize how much is at stake.
With an eye to the progress and peril that human civilization has wrought, Rees considers the prospective evolutionary future of a post-human intelligence:
We humans of the present are certainly not the summit of Creation. Species more intelligent than us will inhabit the earth. They might even appear quite soon. These days evolution is no longer driven by slow natural development, as Darwin described it, but by human culture. So a post-human intelligence might be made by us ourselves. And I hope that our successors have a better understanding of the world.
from Brain Pickings https://www.brainpickings.org/2017/06/19/martin-rees-interview-science-religion/
Just as Google has become ubiquitous in our everyday lives — and a popular verb in our language — its influence on best practices in the tech industry has become enormous. Nearly 20 years after its founding, the company has shaped a generation of tech professionals. To an even greater extent, it has molded product managers to build high-quality consumer products at scale.
In many ways, this is a great thing. Together, Google and Facebook (and their brethren Amazon, Uber, Snapchat, Twitter, Dropbox…) have produced product leaders who’ve changed global conversations about how to innovate, how to assemble happy teams, how to test, iterate and learn. But according to Ogi Kavazovic, CMO and SVP Product Strategy at Flatiron Health, this powerful tide has left a crucial gap in its wake: All of these PMs — now spread across dozens of tech companies — while skilled at building consumer-facing products, are coming up short when they apply the same strategies to build winning enterprise software.
In this exclusive interview, Kavazovic identifies the two most common ways B2B product orgs get stuck, and how to get them back on track. He also makes a compelling case for enterprise software PMs to let go of what they’ve been taught to build more successful teams.
The Problem
Simply stated: Too many product leaders attempt to develop enterprise software using a consumer app playbook.
In Kavazovic’s experience, when product managers leave the consumer sphere for a B2B role, they bring their well-worn tactics and hit the ground running. The trouble is, the product development cycles and customer relationships they encounter are fundamentally different. There are two common — and potentially debilitating — ways things can go wrong:
1. Falling in love with agile at the expense of a clear product vision.
“These days, agile is essentially the law of the land if you’re in product management or engineering,” says Kavazovic. But the resulting emphasis on sprints and short-term planning can lead to a lack of a larger product vision. It’s also often incompatible with the longer planning cycles of enterprise customers and partners.
When customer-facing teams bump up against an agile product org, that incompatibility can quickly turn into friction. “A sales rep may say to a PM, ‘Hey, my customer is asking what we’re doing with the product over the next year or so.’ And the PM will likely say something like, ‘Oh, we’re not sure yet, we’re agile’ — usually paired with a hint of disdain in their voice.” Product-focused teams are trained to think in one- to three-month chunks. They value agility and optionality, and avoid anything that sounds like a long-term commitment at all costs.
The downside of this agile orthodoxy is that its short-term focus can make it very difficult for the sales and BD teams to close that next big 5-year deal or key strategic partnership.
“It becomes a point of tension in B2B companies really quickly,” says Kavazovic. More often than not, that tension is temporarily resolved when sales or marketing draws up hasty, if well-intentioned, pictures of where the product could be going.
That seems like an okay fix in the short term, but if you let this happen over an extended period of time, marketing artifacts can become the de facto product vision, and a dynamic forms where the product management team is left wondering who’s actually setting the product direction. Are they? Or are the sales, marketing, and BD teams?
Kavazovic recalls working with a talented product lead who’d come to his prior company, Opower, from Google. “He was great and came in with a lot of best practices from his prior role: He implemented OKRs for the first time, and people loved it. He talked about the product manager being the CEO of the product and led the organization from a technology-driven place.” His impact on team motivation was fast and positive. But he also brought another habit from his B2C background — in order to protect the product organization and remain agile, he limited the “committed roadmap” to six months (and anything past three months out, even, was a pretty loose commitment).
When Opower got their next RFP from a large customer, the lack of a longer-term product vision quickly became a real issue. “It was a big event in the industry — a 5-year deal with one of the biggest utilities in the country. The customer was planning a rollout 18 to 24 months in the future.” Naturally, they had questions about what functionality they could expect two or three years down the road — and Opower’s sales and marketing teams were caught flat-footed. “It was a last-minute scramble, and we really needed to win this deal,” says Kavazovic. The team spent 10 crazed days shaping a product plan that would fit that customer’s needs.
In the end, it was a happy outcome. The deal was signed. “But it was definitely one of those cases where, for the subsequent two years, we were locked into a half-baked product vision that really came together over the course of a handful of days, and some very late nights. It was a bitter-sweet moment. We all wished we had more time to do the proper research needed to create a longer-term vision that we were all more confident in.”
2. Focusing on user research at the expense of better understanding market dynamics.
The other way B2B product management can go off the rails is forgetting that, in most cases, the user is not the buyer.
This might sound blasphemous to a B2C product manager. Of course the user should be top of mind. And they’re right — in the consumer space, if users love your product, you’re on the right track. “If Google Maps is great and everybody wants to use it, that equals success,” says Kavazovic.
In the B2B world, though, users may not have much of a voice when it comes to a buying decision. A department VP may be the buyer, but it’s the team working for her who’ll actually be using the product. When you’re selling to a business, you need to understand every factor that goes into its purchasing decisions — and it’s quite possible that how delightful a product is to use won’t be anywhere near the top of that list. Therefore, you need to listen not to the voice of the user but to what Kavazovic calls the voice of the market.
Heeding the voice of the market requires looking at all the broader forces — what your competition is pitching, upcoming regulations, as well as the ambitions and needs of your biggest and most important customers (current and prospective).
“All of these things need to be fully considered in order to make the right product strategy decisions,” says Kavazovic. “And that’s quite a bit of work.” While there are plenty of industry-standard B2C best practices for how to do user research, he’s found a void when it comes to baking various market forces into your roadmap. As a result, product teams stick to what they know — well-run focus groups and user research — but stumble through an ad hoc approach to incorporating the market dynamics that will likely influence a buyer’s decision.
Often, the result is a rude awakening when the sales team gets into the field. “We thought we were doing great at Opower — the current customers were happy and we were confident we had the best product in the market from a user experience perspective,” says Kavazovic. “Then out of nowhere we lost the next three deals — a new competitor backed by a well-known Silicon Valley billionaire interested in cleantech entered the game. We soon found out they were winning because they pitched a grander platform vision about where the market was going and where the product may need to be years from now, complete with very convincing mocks and demos. We started seeing customers partnering with a company whose product didn’t even exist.”
In the B2B world, you can have great underlying tech and a superior user experience, but still lose badly to a competitor selling ‘the future.’
Kavazovic near the Flatiron office in New York.
The Solution
Bridging this gap between B2C training and B2B needs is about one thing: adopting a hybrid approach to strategic planning.
In Kavazovic’s experience, the two pitfalls described above boil down to a key misunderstanding: agile development and longer-term planning are NOT actually the mutually exclusive modus operandi the tech world has portrayed them to be.
There’s a quote that’s stuck with him, from someone who doesn’t pop up on TechCrunch too often: Dwight Eisenhower. “He said, ‘Plans are useless, but planning is indispensable.’ It occurred to me when I read that for the first time that this tension between staying agile and strategic planning is something the military has been dealing with for generations.”
Eisenhower knew that any plan crafted before battle would be obsolete at first contact with the enemy. In his work, Kavazovic wants to be this realistic too. “Translating this into tech: no long-term plan or product vision survives contact with the user in the product-design sense. That’s why agile methodology is specifically designed to create user experiences that work,” he says. “It’s absolutely suboptimal to design a particular product all the way down to years’ worth of features, make that the blueprint, and build it out.” Inevitably, sticking to a rigid long-term plan without a mechanism to iterate on user feedback would result in features users don’t want, costly re-dos and potentially total product failure.
But there’s still a vital difference between consumer and enterprise sales: Selling to users vs. selling to buyers.
“Agile is really good for making sure that you create a successful user experience. But it’s important to separate that from the overall product roadmap, which requires meeting the needs of your buyer.” The key is to take a two-pronged approach: 1) articulate a long-term product vision, but 2) establish a culture of flexibility when it comes to the details.
“If you’re a B2B product manager, you now have two deliverables. One is a high-level roadmap — I think a healthy timeline is between 18 to 24 months,” says Kavazovic. “That document is sometimes called the ‘vision roadmap,’ and includes big, directional boulders. It should be exciting! Importantly, it comes with hi-fi mocks — something that can be used to bring it to life, to galvanize the troops internally — especially engineering — to stay ahead of a competitor pitching vaporware, and to convince a strategic buyer or a partner that you’re the right long-term choice. The benefits are manifold.”
For day-to-day execution, you’ll also need a shorter-term, development roadmap. “This one is the real brass tacks. It’s your next one to three months, broken down by feature, and spelling out the committed, ‘shovel ready’ plan that the engineers will execute on.”
By bifurcating the process, you arrive at two guiding artifacts, each with its own purpose and process:
The Long-term Plan & Roadmap
At a startup (no matter how big), the whole company needs to be bought into and feel ownership of this overarching vision, so it should be the product of cross-functional teamwork. Opower’s process was months long and carefully formalized: Every department had a representative on the strategic planning team for a given product, ranging from the executive team to customer support to BD — and of course product management and marketing.
Together, the cross-functional group produces two artifacts: “One is what is sometimes called the market requirements document — that’s the voice of the market. At Flatiron, this is everything from what our salespeople are hearing, to the analysis our product marketing team has done, to what the accounts people are learning from their customers,” says Kavazovic. “A ton of market intelligence can bubble up from within a company if you take the time to do it.”
From there, the market requirements document goes to the product leadership to determine what’s feasible and compatible with the technology stack as it stands. “Meshing those two things together is a judgment call by the product leads, and is a bit of an art, but the result is a well-baked draft of a product vision.”
Still, they’re not done yet. This vision roadmap then undergoes no fewer than two rounds of review and feedback, first by the leadership team and then by the entire company. “This seems like a lot of work, and it is. But the benefit of casting a wide net, of getting everybody’s input in a very methodical way, is that — by the time you come out on the other end — you have a product vision and a strategy that everybody understands and finds exciting and motivating,” says Kavazovic.
The Development Roadmap
“This one is pretty much entirely the purview of product managers and engineers,” he says. “They do the hard work of disaggregating and figuring out, based on a whole slew of factors, which set of features are the most optimal to build next and how you’re going to get it done.”
At Opower, there was a week-long planning process every quarter led by tech leads and PMs to map out the next few iterations with their scrum teams. The development roadmap lived in engineering-oriented systems like JIRA, accompanied by a more accessible, higher-level document published to the rest of the company.
At Flatiron, the team named this deliverable the “transparent roadmap,” and its purpose is to guide the operations of various other functions. This includes informing key customers who may be waiting for a particular feature, giving the marketing team new content for an upcoming campaign, or allowing the customer success team to inform existing customers of upcoming product changes. It’s also an important check-in against progress on the overall strategy and product vision.
These two documents are obviously linked, but importantly, they’re distinct. “Over time, after you get through three or four of your shorter-term development roadmaps, you should find yourself on your way to realizing the 2-year vision,” says Kavazovic. “At Opower, we found — contrary to some anti-long-term planning rhetoric out there — that we were able to deliver better than 80% of the functionality in the original vision with lower than a 20% error margin on estimated time and budget.” The key is to leave enough flexibility in your product vision to accommodate inevitable shifts and feature-level scope adjustments as you work out the details of your development roadmap.
Communicate, internally and to customers, that your vision roadmap is directional.
“I’ve found that most customers are very receptive to things changing over time, even when they work at a stodgy company like a 100-year-old, extremely risk-averse electric utility,” he says. “They intuitively get that a lot of the details may change over 18 months.”
Give Your Customers the Benefit of the Doubt
Many startups are, understandably, apprehensive about sharing what they (supposedly) have in store for a year or two out, even in broad strokes. After all, what if a customer gets attached to a feature from a mock-up that never comes to fruition? What if you change your minds? Isn’t it safer to say nothing at all?
“Actually, the opposite is true, I’ve found,” says Kavazovic. “The majority of customers totally get that things change, that priorities may change. More importantly, they understand that we may discover better solutions.”
If this kind of transparency is uncomfortable, he suggests a couple of paradigm shifts that will set you up for productive customer-facing interactions:
It’s never too early to hear what your customers think. Ideally, your product marketing organization starts serving as the customer’s proxy during the strategic planning phase. “Usually they take the lead in documenting what the customer wants to see,” says Kavazovic. “They weigh what your big customers want, what the competition is doing, and so on. It’s a heavy lift, but when you deliver a quality market requirements document, a lot of that should be baked in.”
Once you have a roadmap you feel good about, be open and share it with a broader set of customers. See what resonates. Revisit what doesn’t. You won’t let the air out of your plans. Instead, you’ll make sure you have a winning strategy. “Even better, you may be able to get some customers to sign up before it’s built — this can be very positive for your cashflow if you’re a startup, and perhaps more importantly, it can help you pre-empt a competitor.”
Customer education never ends. As you move through the long-term roadmap, you may deviate from a development or feature that a customer has come to expect. When you do, by all means explain that — and teach them why the new approach is better.
Kavazovic recalls one incident at Opower, when a customer had grown quite attached to a spinning pie chart feature that was highlighted prominently in a demo of their product vision. “They kept asking our account team when it was coming.” But the UX team decided to scrap it based on “really, really bad” user testing.
“UX was very nervous about presenting this — they were our biggest customer. But they did an incredible job preparing the deck, which featured all the recent research they’d done. That meeting was one of the most successful, slam-dunk client meetings I’ve ever been in,” he says. “The customer was not upset, and on the contrary, was floored by the level of research and the data that we brought to the table. We candidly explained what we’d learned through our agile process, and we described how we got from our original plan to where we ended up — and why that was much more aligned with what their users wanted. They forgot all about the spinning pie chart in less than an hour.”
At the end of the day, your customers care most about achieving their business objectives. So stay focused on their business case — that’s where alignment is really necessary.
Turn Planning Into a High-Performance Team Sport
At Flatiron, Kavazovic and his colleagues took the democratic aspect of cross-functional strategic planning pretty far. “Each of those cross-functional teams met to come up with what they thought the strategy should be for their particular product line, an 18-month vision that they then presented to the leadership team. We decided to live broadcast all of the presentations to the entire company,” he says.
This was a somewhat controversial idea. But come presentation day, Flatiron’s 80-person conference room was packed with over 100 employees — with many more dialed in online. From 8 a.m. to 6 p.m., presentations took place and voices from across the company chimed in on each team’s strategic vision.
“The feedback that day was overwhelmingly positive,” says Kavazovic. “For the people who were presenting, it was great visibility, and an important opportunity to get the company excited about what they were working on.” While some members of the leadership team had initially feared this would be an unruly free-for-all, it turned into a forcing function for the company’s best thinking.
The benefits of including your team in vision building are multiplicative. The people building your product feel engaged with their work, and the people talking about it can do so with confidence and authority.
Moreover, rallying everyone around strategy becomes a great equalizer. “Once a company gets to a certain size, the most important management challenge becomes ensuring that all those people are rowing in the same direction,” Kavazovic says. “When you include everybody in this high-level planning and product vision process, you all know what you’re moving toward and can get to work immediately.”
from First Round Review http://firstround.com/review/dear-pms-its-time-to-rethink-agile-at-enterprise-startups/?utm_medium=rss&utm_source=frr_feed&utm_campaign=home_stream&utm_content=RSSLink
The types of design research every designer should know NOW
In UX design, research is a fundamental part of solving relevant problems and/or narrowing down to the “right” problem users face. A designer’s job is to understand their users, which means going beyond their initial assumptions to put themselves in another person’s shoes in order to create products that respond to a human need.
Good research doesn’t just end with good data; it ends with good design and functionality users love, want and need.
Design research is often overlooked because designers tend to emphasize how a design looks, which leaves them with only a surface-level understanding of the people they design for. That mindset goes against what UX is all about: being user-centered.
UX design is centered around research to understand the needs of people and how the kind of products/services we build will help them.
Here are some research methods every designer should know off the top of their head when going into a project. Even if designers are not the ones doing the research, knowing these methods helps them communicate better with UX researchers and drive engagement across the industry.
Primary
Primary research is essentially coming up with new data to understand who you are designing for and what you would potentially plan on designing. It allows us to validate our ideas with our users and design more meaningful solutions for them. Designers typically gather this type of data through interviews with individuals or through small groups, surveys, or questionnaires.
It is important to understand what you want to research, and the kind and quality of data you want to gather, before going out to find participants. In an article from the University of Surrey, the author points out two important considerations when conducting primary research: validity and practicality.
The validity of data refers to the truth that it tells about the subject or phenomenon being studied. It is possible for data to be reliable without being valid.
The practicalities of the research need to be carefully considered when developing the research design, for instance:
– cost and budget
– time and scale
– size of sample
Bryman in Social Research Methods (2001) identifies four types of validity which can influence your findings:
1. Measurement validity or construct validity: whether a measure being used really measures what it claims.
i.e. do statistics regarding church attendance really measure the strength of religious beliefs?
2. Internal validity: refers to causality and whether a conclusion of the research or theory developed is a true reflection of the causes.
i.e. is it a true cause that being unemployed causes crime, or are there other explanations?
3. External validity: considers whether the results of a particular piece of research can be generalised to other groups.
i.e. if one form of community development approach works in this region, will it necessarily have the same impact in another location?
4. Ecological validity: considers whether ‘…social scientific findings are appropriate to people’s everyday natural setting’ (Bryman, 2001).
i.e. if a situation is being observed in a false setting, how may that influence people’s behavior?
Secondary
Secondary research means using existing sources, such as the internet, books, or articles, to support your design choices and the context behind your design. Secondary research is also used to further validate user insights from primary research and to build a stronger case for the overall design. Typically, secondary research consists of already-summarized insights from existing research.
It is okay to use only secondary research to assess your design, but if you have time, I would definitely recommend doing primary research along with secondary research to really get a sense of who you are designing for and gather insights that are more relevant and compelling than existing data. When you collect user data that is specific to your design, it will generate better insights and a better product.
Evaluative
Evaluative research is assessing a specific problem to ensure usability and to ground it in the wants, needs, and desires of real people. One way to do an evaluative study is to have a user try your product and give them questions or tasks so they can think out loud as they attempt to complete each task. There are two types of evaluative studies: summative and formative.
Summative evaluation- Summative evaluation seeks to understand the outcomes or effects of something. It emphasizes the outcome more than the process.
Summative evaluation can assess things such as:
Finance: Effect in terms of cost, savings, profit and so on.
Impact: Broad effect, both positive and negative, including depth, spread and time effects.
Outcomes: Whether desired or unwanted effects are achieved.
Secondary analysis: Analysis of existing data to derive additional information.
Meta-analysis: Integrating results of multiple studies.
Formative evaluation- Formative evaluation is used to help strengthen or improve the person or thing being tested.
Formative evaluation can assess things such as:
Implementation: Monitoring success of a process or project.
Needs: Looking at the type and level of need.
Potential: The ability to use information for formative purposes.
Exploratory
Connecting pieces of data and making sense of them is part of the exploratory research process.
Exploratory research is conducting research around a topic about which little or nothing is known. The purpose of an exploratory study is to gain a deep understanding of and familiarity with the topic by immersing yourself in it as much as you can, in order to create a direction for how the data could be used in the future.
With exploratory research, you have the opportunity to gain new insights and create worthwhile solutions for bigger issues more meaningful than what already exists.
Exploratory research allows us to confirm our assumptions on topics that are often overlooked (e.g. prisoners, the homeless) by providing an opportunity to generate new ideas and development for existing problems and opportunities.
Based on an article from Lynn University, exploratory research tells us that:
Exploratory research is a useful approach for gaining background information on a particular topic.
Exploratory research is flexible and can address research questions of all types (what, why, how).
Provides an opportunity to define new terms and clarify existing concepts.
Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
Exploratory studies help establish research priorities.
Generative
Autism Empathy Tools by Heeju Kim (RCA), allowing the wearer to experience first-hand what it’s like for people with autism to see, hear and speak.
Generative research is about taking the research you have conducted and using those insights to decide which problem you want to solve and to create solutions for it. These solutions are generally new or an improvement on an existing solution.
Because generative research is more or less the opportunity- and solution-creating stage, you must understand your users’ wants, needs and goals beforehand. Generative research allows us to observe a user’s nuanced behaviors in a natural environment, which can be understood through ethnography, contextual interviews, focus groups, and data mining.
What is the difference between market research and design research?
You can market to users what they said they wanted but market research can’t tell you about solving problems customers can’t conceive are solvable (Eric Schmidt and Jonathan Rosenberg)
The main difference between market research and design research is that design research is more fluid and intuitive: its data is based on how people feel, and it relies on our human capacity to connect with others in order to reach an understanding that drives change. The motive behind design research is to get as close as possible to another person in order to create value around their goals. Market research is often based on logic and a company’s need to scope out its competition. Used in conjunction, the two can help you design better user experiences by connecting with users and understanding them.
Conclusion
So why is design research so important? Design research allows us to understand complex human behavior by getting to the root of a problem and understanding a user’s needs, wants and goals. It also grounds us in what shapes a user’s experience, helping us solve for their top pain points. Overall, the data we collect through design research allows us to make decisions and turn them into useful applications, driving us to create products that are relevant, accessible and applicable for users and for the people we work with, whether stakeholders, product managers or other designers on a team.
If you have questions or just want to chat, feel free to connect and message me on Linkedin 🙂
A computer’s ability to predict a patient’s lifespan simply by looking at images of their organs is a step closer to becoming a reality, thanks to new research led by the University of Adelaide.
The research, now published in the Nature journal Scientific Reports, has implications for the early diagnosis of serious illness, and medical intervention.
Researchers from the University’s School of Public Health and School of Computer Science, along with Australian and international collaborators, used artificial intelligence to analyse the medical imaging of 48 patients’ chests. This computer-based analysis was able to predict which patients would die within five years, with 69% accuracy — comparable to ‘manual’ predictions by clinicians.
This is the first study of its kind using medical images and artificial intelligence.
“Predicting the future of a patient is useful because it may enable doctors to tailor treatments to the individual,” says lead author Dr Luke Oakden-Rayner, a radiologist and PhD student with the University of Adelaide’s School of Public Health.
“The accurate assessment of biological age and the prediction of a patient’s longevity has so far been limited by doctors’ inability to look inside the body and measure the health of each organ.
“Our research has investigated the use of ‘deep learning’, a technique where computer systems can learn how to understand and analyse images.
“Although for this study only a small sample of patients was used, our research suggests that the computer has learnt to recognise the complex imaging appearances of diseases, something that requires extensive training for human experts,” Dr Oakden-Rayner says.
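To make the technique concrete, here is a minimal, purely illustrative sketch in Python (using Keras) of what “deep learning on medical images” means in practice: a small convolutional network trained to output a probability of death within five years. The architecture, image size, and the randomly generated stand-in data are assumptions chosen only for illustration; they are not the study’s actual model or dataset.

```python
# Illustrative only: a tiny CNN mapping a chest image to a 5-year mortality
# probability. Layer sizes, input shape, and data are placeholders.
import numpy as np
from tensorflow.keras import layers, models

def build_mortality_model(input_shape=(128, 128, 1)):
    """Binary classifier: P(death within five years | image)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability of the outcome
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Stand-in data mirroring the study's small sample: 48 images, each labelled
# with whether the patient died within five years (random values, shape only).
images = np.random.rand(48, 128, 128, 1).astype("float32")
outcomes = np.random.randint(0, 2, size=(48, 1)).astype("float32")

model = build_mortality_model()
model.fit(images, outcomes, epochs=5, batch_size=8, validation_split=0.25)
```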
While the researchers could not identify exactly what the computer system was seeing in the images to make its predictions, the most confident predictions were made for patients with severe chronic diseases such as emphysema and congestive heart failure.
“Instead of focusing on diagnosing diseases, the automated systems can predict medical outcomes in a way that doctors are not trained to do, by incorporating large volumes of data and detecting subtle patterns,” Dr Oakden-Rayner says.
“Our research opens new avenues for the application of artificial intelligence technology in medical image analysis, and could offer new hope for the early detection of serious illness, requiring specific medical interventions.”
The researchers hope to apply the same techniques to predict other important medical conditions, such as the onset of heart attacks.
The next stage of their research involves analysing tens of thousands of patient images.
On HBO’s Silicon Valley, startups promise to “change the world” by tackling silly, often non-existent problems. But this season, the show’s characters are tackling a project that really could. In their latest pivot, Richard Hendricks and the Pied Piper gang are trying to create a new internet that cuts out intermediaries like Facebook, Google, and the fictional Hooli. Their idea: use a peer-to-peer network built atop every smartphone on the planet, effectively rendering huge data centers full of servers unnecessary.
“If we could do it we could build a completely decentralized version of our current internet,” Hendricks says. “With no firewalls, no tolls, no government regulation, no spying, information would be totally free in every sense of the word.”
But wait: Isn’t the internet already a decentralized network that no one owns? In theory, yes. But in practice, a small number of enormous companies control, or at least mediate, much of the internet. Sure, anyone can publish whatever they want to the web. But without Facebook and Google, will anyone be able to find it? Amazon, meanwhile, controls not just the web’s biggest online store but a cloud computing service so large and important that when part of it went offline briefly earlier this year, the internet itself seemed to go down. Similarly, when hackers attacked the lesser-known company Dyn, now owned by tech giant Oracle, last year, large swaths of the internet came crashing down with it. Meanwhile, a handful of telecommunications giants, including Comcast, Charter, and Verizon, control the market for internet access and have the technical capability to block you from accessing particular sites or apps. In some countries, a single state-owned telco controls internet access completely.
Given those very non-utopian realities, people in the real world are also hard at work trying to rebuild the internet in a way that comes closer to the decentralized ideal. They’re still pretty far from Richard’s utopian vision, but it’s already possible to do some of what he describes. Still, it’s not enough to just cut out today’s internet power players. You also need to build a new internet that people will actually want to use.
Storage Everywhere
On the show, Richard’s plan stems from the realization that just about everyone carries around a smartphone with hundreds of times more computing power than the machines that sent humans to the moon. What’s more, those phones are just sitting in people’s pockets doing nothing for most of the day. Richard proposes to use his fictional compression technology—his big innovation from season one—to free up extra space on people’s phones. In exchange for using the app, users would agree to share some of the space they free up with Pied Piper, who will then resell it to companies for far less than they currently pay giants like Amazon.
The closest thing to what’s described on Silicon Valley might be Storj, a decentralized cloud storage company. Much like Pied Piper, Storj has built a network of people who sell their unused storage capacity. If you want to buy space on the Storj network, you upload your files and the company splits them up into smaller pieces, encrypts them so that no one but you can read your data, and then distributes those pieces across its network.
“You control your own encryption keys so we have no access to the data,” says co-founder John Quinn. “We have no knowledge of what is being stored.”
Also like Pied Piper, Storj bills itself as safer than traditional storage systems, because your files will reside on multiple computers throughout the world. Quinn says that in order to lose a file, 21 out of 40 of the computers hosting it would have to go offline.
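To make that flow concrete, here is a minimal Python sketch of the client-side steps described above: encrypt locally with a key only the owner holds, split the ciphertext into shards, and label each shard by its hash before handing the pieces to different peers. The function names and shard size are illustrative choices, not Storj’s actual API, and real deployments add erasure-coded redundancy, which is what produces the “21 of 40 hosts” resilience Quinn describes.

```python
# A toy sketch of client-side encrypt-and-shard storage (not Storj's API).
import hashlib
from cryptography.fernet import Fernet

def shard_file(data: bytes, shard_size: int = 1024):
    """Encrypt data with a client-held key, then split it into hash-labelled shards."""
    key = Fernet.generate_key()            # stays with the owner, never uploaded
    ciphertext = Fernet(key).encrypt(data)
    shards = {}
    for i in range(0, len(ciphertext), shard_size):
        chunk = ciphertext[i:i + shard_size]
        shard_id = hashlib.sha256(chunk).hexdigest()
        shards[shard_id] = chunk            # each shard would go to a different peer
    return key, shards                      # real systems also erasure-code the shards

def reassemble(key: bytes, ordered_chunks: list) -> bytes:
    """Recombine the shards in order and decrypt with the owner's key."""
    return Fernet(key).decrypt(b"".join(ordered_chunks))

if __name__ == "__main__":
    original = b"quarterly report " * 200
    key, shards = shard_file(original)
    assert reassemble(key, list(shards.values())) == original
```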
Storj proves that Silicon Valley’s basic idea is feasible. But unlike Pied Piper, Storj doesn’t rely on smartphones. “Phones don’t have much storage and the network capability isn’t great, so the show’s idea is a little fanciful,” says Quinn. Someday, 5G wireless networks might make phones a more viable part of the Storj network. If Richard’s compression algorithm were real, those smaller files would help too. But for now, the Storj network relies primarily on servers, laptops, and desktop computers. The reality is less grand than the HBO fantasy.
IPFS
As interesting as Storj is, it’s not quite what Richard actually described in his pitch. Storj is a storage service, not a whole new internet. A more ambitious project called IPFS (short for “Interplanetary File System”) is probably a bit closer to Richard’s grand vision of a censorship-resistant internet with privacy features built right in.
The idea behind IPFS is to have web browsers store copies of the pages they visit and then do double duty as web servers. That way, if the original server disappears, the people who visited the page can still share it with the world. Publishers get improved resilience, and readers get to help support the content they care about. With encryption built into the protocol, criminals and spies in theory can’t see what you’re looking at. Eventually, the IPFS team and a gaggle of other groups hope to make it possible to build interactive apps along the lines of Facebook that don’t require any centralized servers to run.
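The mechanism underneath that idea is content addressing: a page is fetched by the hash of its bytes rather than from a particular server, so any node holding a copy can serve it and the reader can verify it has not been altered. The toy Python sketch below shows the principle; the class and the two-node exchange are invented for illustration and are not IPFS’s actual API.

```python
# A toy, in-memory illustration of content-addressed storage (not real IPFS).
import hashlib

class ContentStore:
    """A content-addressed block store, one per peer."""
    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()          # the "content identifier"
        self.blocks[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        data = self.blocks[cid]
        assert hashlib.sha256(data).hexdigest() == cid  # integrity is verifiable
        return data

# A publisher stores a page; a visitor caches it and can keep serving it
# even if the publisher's node later disappears.
publisher, visitor = ContentStore(), ContentStore()
cid = publisher.put(b"<html>my page</html>")
visitor.put(publisher.get(cid))   # visiting caches a copy under the same cid
print(visitor.get(cid))           # still retrievable from the visitor's node
```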
But the idea of building a censorship-proof internet by backing up copies across the network isn’t without its potential problems. Sometimes publishers want to remove old content. IPFS creator Juan Benet told us last year that the project is trying to work out ways to let publishers “recall” pages that are being shared. But that idea is also fraught. What’s to stop a government censor from using the recall feature? What happens if someone creates a version that ignores recalls?
Then there are moral and legal risks. Tools like Storj and the venerable peer-to-peer sharing system Freenet make it impossible to know just what content you’re storing for other people, which means you could be playing host to, say, child pornography. Quinn says that the Storj team is currently working on ways to block known problem users. But it won’t be able to completely guarantee that none of its hosts will end up storing illegal content.
IPFS gets around this largely by letting people decide which of the content they’ve visited they actually want to share. But this means that less popular content, even if it’s perfectly legal and ethical, might end up disappearing if too few people share it. Benet and company are working on a system called Filecoin that, not unlike Storj, would compensate people for providing access.
Even if these trade-offs inherent in decentralization are overcome, people may still not want to use these apps. Storj may be able to win over businesses by being cheaper, but even if it is more reliable, the idea of storing data on random machines scattered across the internet instead of in a traditional data center sounds risky compared to, say, the massively robust AWS, backed by Amazon’s technical know-how and billions of dollars. Convincing people to use decentralized alternatives to Facebook and Twitter has proven to be a notoriously difficult problem. Getting people to use what amounts to a whole new version of the web could be even harder.
Mesh
Even if IPFS, Storj, or one of the countless other decentralized platforms out there does win people over, they’re still technically riding atop the existing internet infrastructure controlled by a shrinking number of telcos. Silicon Valley hasn’t addressed this problem yet. But what if you could chain the smartphones and laptops of the world together using WiFi and Bluetooth to create a wireless network that was free and open to everyone, with no need for Big Telecom?
Australian computer scientist Paul Gardner-Stephen tried to do something like that after the Haiti earthquake in 2010. “Mobile phones have the capability to run autonomous networks, it’s just that no one had implemented it,” he says. Gardner-Stephen helped build Serval, a decentralized messaging app that can spread texts in a peer-to-peer fashion without the need for a traditional telco carrier. But he quickly realized, as the Pied Piper team likely will, that trying to turn people’s mobile phones into servers drains their batteries too quickly to be practical. Today, the Serval team relies on solar powered base stations to relay messages.
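At its core, what Serval does is store-and-forward flooding: each phone keeps the messages it has seen and passes new ones along to whatever neighbours it can currently reach, so a text can hop across the mesh without any carrier. The toy Python sketch below illustrates that principle; the node class, topology, and message IDs are invented for illustration and are not Serval’s actual protocol.

```python
# A toy store-and-forward mesh: nodes relay messages to reachable neighbours.
class MeshNode:
    def __init__(self, name):
        self.name = name
        self.neighbours = []     # nodes currently in radio range
        self.seen = set()        # message ids already stored and relayed

    def receive(self, msg_id, text):
        if msg_id in self.seen:
            return               # avoid re-flooding the same message
        self.seen.add(msg_id)
        print(f"{self.name} stored: {text}")
        for peer in self.neighbours:
            peer.receive(msg_id, text)   # forward to every reachable peer

# A can only reach C through B, yet the message still arrives.
a, b, c = MeshNode("A"), MeshNode("B"), MeshNode("C")
a.neighbours, b.neighbours = [b], [c]
a.receive("msg-1", "Are you safe?")
```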
Serval and similar apps like Firechat aren’t meant to replace the internet, just to provide communications during disasters or in remote locations. But the idea of creating decentralized wireless networks — mesh networks — still has merit. One such network, Wlan Slovenija, now covers all of Slovenia and is spreading to neighboring countries. But these mesh networks are still a long way from replacing telcos, especially in the US. Even as wireless base stations improve, they can’t quite compete with the fiber optic cables that link the nation’s telco infrastructure on speed and reliability, and some community networks, such as Guifi in Spain, are bolstering their wireless connections with fiber.
Even then, given a choice, would people really pick a decentralized option over the status quo? Customer service at big broadband companies may be bad to non-existent, but you can still call someone. For those who would nevertheless prefer to wrest control of the internet from large corporations, these new alternatives will need to be better and faster than the services they hope to displace. Simply being decentralized isn’t enough. It wasn’t so long ago that people questioned whether people would ever take to the internet itself at all. As the season finale approaches, Pied Piper will find out whether its version of a new internet works—and whether anyone wants it. They just have to build it and see—just like in the real world.