How AI Helps The Intelligence Community Find Needles In The Haystack

If you were an analyst in the American intelligence community covering terrorism issues related to Russia, an attack in Little Rock, Arkansas, would not, on its own, have interested you. What would have stopped you in your tracks, though, was that the next day the Russian consulate tweeted that the attack may have been terrorism involving Russians.

In the flood of news coverage of the attack and the countless other developments in the world, you might have missed the terrorism theory. After all, humans can only process so much information manually. But if you’d been using a new visualization tool from the artificial intelligence startup Primer, you would have seen Russian media’s intense interest in the Little Rock attack. Though that theory eventually proved to be incorrect, you’d at least have been aware of the possibility, and been able to make some decisions about what you wanted to report to your superiors.

Today, Primer is coming out of stealth. The 35-person startup, which recently closed a $14.7 million Series A round of funding, has developed a machine learning system that can quickly search through tens of millions of data sources–news articles, academic papers, social media posts, and so on–to surface the kind of information that is essential to intelligence analysts and corporate analysts alike. The system is also capable of delivering the most salient data points in natural language that approaches the level of what a human analyst might write.

Primer’s initial customers are In-Q-Tel, a nonprofit venture capital firm that invests in companies developing technologies useful to the CIA and other intelligence agencies; Walmart; and Singapore’s $100 billion sovereign wealth fund, GIC.

“For us, the big goal is to build a technology that can read and write, and help us understand the world,” said Primer CEO Sean Gourley, “as it becomes increasingly volatile, uncertain, and complex.”

Primer’s pitch to corporate and government clients is its technology’s ability to sift through immense quantities of data. According to IDC, the total amount of data produced globally will grow from 16.1 zettabytes (a trillion gigabytes) in 2016 to 163 zettabytes by 2025. Of that, 5.2 zettabytes will be subject to data analysis by 2025, 50 times more than last year. Artificial intelligence systems are expected to touch about 1.4 zettabytes by 2025, 100 times more than last year.

Those are big numbers, and Primer believes its technology can help its customers find the meaningful information that lives at the far end of the long tail, things that might be hidden in, as Gourley puts it, the seventh paragraph on page 163 of an obscure report. A human analyst might never have the time in her day to uncover something so deep in the stack, but Primer’s system is meant to highlight it if it matches the analyst’s interests–and do so in a way that’s easy to digest and pass on to others.

Part of what Primer is unveiling today is its visualization tool–built by a former New York Times infographic artist. The tool shows the evolution, over time, of tens of millions of Russian- and English-language articles on a map of the world, with color-coded geographical hotspots indicating media interest in topics related to terrorism. That’s how an American intelligence analyst might have first noticed Russian interest in the Little Rock attack.

Similarly, the tool can easily show how, in September 2016, there were a significant number of articles written in Beslan, Russia, about a new documentary film that reports on a massive hostage crisis that took place there in 2004. Or how last January, Russian media was obsessed with coverage of a court case in The Hague brought by the Ukrainian government alleging support by Moscow for terrorism in Crimea.

The visualization tool–which Primer clients can use to search for numerous topics of interest–is meant to be advanced day by day, which allows analysts to see, at a glance, where the daily hotspots of interest were in both Russian and English media. In essence, it’s about being able to quickly triage the most important pieces of information in a very messy mountain of unstructured data.
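To illustrate the kind of triage such a tool performs, here is a minimal sketch of how daily hotspots might be surfaced from a stream of geotagged articles. It is not Primer's code; the record layout, topic labels, and threshold are assumptions made for the example.

```python
from collections import Counter
from datetime import date

# Hypothetical article records; Primer's real pipeline and schema are not public.
articles = [
    {"day": date(2017, 7, 2), "place": "Little Rock", "lang": "ru", "topic": "terrorism"},
    {"day": date(2017, 7, 2), "place": "Little Rock", "lang": "ru", "topic": "terrorism"},
    {"day": date(2017, 7, 2), "place": "Little Rock", "lang": "en", "topic": "crime"},
]

def daily_hotspots(records, topic, threshold=2):
    """Count articles per (day, place, language) for a topic; keep the busy cells."""
    counts = Counter(
        (r["day"], r["place"], r["lang"]) for r in records if r["topic"] == topic
    )
    return {cell: n for cell, n in counts.items() if n >= threshold}

print(daily_hotspots(articles, topic="terrorism"))
# {(datetime.date(2017, 7, 2), 'Little Rock', 'ru'): 2}
```

A real system would also normalize for each region's baseline coverage volume, but the triage idea is the same: count, compare, and surface the cells that stand out.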

“Primer’s technology is poised to revolutionize the way our Intelligence Community Partners consume and prioritize information, helping them to identify emerging areas of interest from real-time data,” said George Hoyem, the managing partner of Investments at In-Q-Tel, in a statement provided to Fast Company by Primer. “Many enterprise companies claim they’re leveraging artificial intelligence to generate insights from unstructured content—but Primer is one of the few actually delivering on that promise.”

The same strategy applies to corporations, Gourley says. Although he wouldn’t spell out exactly how Walmart is using Primer’s tools, he explained that, for example, the retail giant might have an internal analyst covering the snacks or beverages businesses. That person would likely be spending part of every day searching for data on organic beef, or looking for consumer trends, new behaviors, or health studies that could help produce internal reports.

As with those in the intelligence world, these corporate analysts face an overwhelming amount of data on a daily basis, and Primer believes its tools can give its customers a leg up on those who rely on more manual approaches to sifting through the innumerable sources of information.

The system is also meant to be quick. Depending on how much computing power a customer wants to devote to it, Primer’s tools can generate reports in anywhere from five to 30 minutes.

Ultimately, Gourley argues, customers using Primer’s technology will find themselves with more time to devote to following hunches–something they may not have been able to do previously because they’d have had to be heads-down trying to digest the never-ending flood of data.

“As we automate, it starts to save time,” he says, “and then allows humans to say, ‘What happens if we look at this?’ We get curious, and we do that in ways that are cheap for machines, but expensive for humans. We like humans telling us stories, but we also like machines telling us stories at scale we can’t.”

from Fast Company https://www.fastcompany.com/40484861/how-ai-helps-the-intelligence-community-find-needles-in-the-haystack?partner=feedburner&utm_source=feedburner&utm_medium=feed&utm_campaign=feedburner+fastcompany&utm_content=feedburner

Google’s artificial intelligence computer ‘no longer constrained by … – Fox News

The computer that stunned humanity by beating the best mortal players at a strategy board game requiring “intuition” has become even smarter, its creators claim.

Even more startling, the updated version of AlphaGo is entirely self-taught — a major step towards the rise of machines that achieve superhuman abilities “with no human input”, they reported in the science journal Nature.

Dubbed AlphaGo Zero, the Artificial Intelligence (AI) system learnt by itself, within days, to master the ancient Chinese board game known as “Go” — said to be the most complex two-person challenge ever invented.

It came up with its own, novel moves to eclipse all the Go acumen humans have acquired over thousands of years.

After just three days of self-training it was put to the ultimate test against AlphaGo, its forerunner which previously dethroned the top human champs.

AlphaGo Zero won by 100 games to zero.

“AlphaGo Zero not only rediscovered the common patterns and openings that humans tend to play … it ultimately discarded them in preference for its own variants which humans don’t even know about or play at the moment,” said AlphaGo lead researcher David Silver.

The 3000-year-old Chinese game played with black and white stones on a board has more move configurations possible than there are atoms in the Universe.

AlphaGo made world headlines with its shock 4-1 victory in March 2016 over 18-time Go champion Lee Se-Dol, one of the game’s all-time masters.

Lee’s defeat showed that AI was progressing faster than widely thought, said experts at the time who called for rules to make sure powerful AI always remains completely under human control.

In May this year, an updated AlphaGo Master program beat world Number One Ke Jie in three matches out of three.

NOT CONSTRAINED BY HUMANS

Unlike its predecessors which trained on data from thousands of human games before practising by playing against itself, AlphaGo Zero did not learn from humans, or by playing against them, according to researchers at DeepMind, the Google-owned British artificial intelligence (AI) company developing the system.

“All previous versions of AlphaGo … were told: ‘Well, in this position the human expert played this particular move, and in this other position the human expert played here’,” Silver said in a video explaining the advance.

AlphaGo Zero skipped this step.

Instead, it was programmed to respond to reward — a positive point for a win versus a negative point for a loss.

Starting with just the rules of Go and no instructions, the system learnt the game, devised strategy and improved as it competed against itself — starting with “completely random play” to figure out how the reward is earned. This is a trial-and-error process known as “reinforcement learning”.
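For readers unfamiliar with the term, here is a toy sketch of reward-driven self-play in Python. It is emphatically not DeepMind's system: AlphaGo Zero pairs a deep neural network with Monte Carlo tree search, whereas this sketch uses a trivially small game (one-pile Nim) and a running average of rewards, just to show the loop of playing against yourself and scoring a win as +1 and a loss as -1.

```python
import random
from collections import defaultdict

# Toy illustration of reward-driven self-play on one-pile Nim (take 1-3 stones,
# taking the last stone wins). Not DeepMind's code: AlphaGo Zero pairs a deep
# network with tree search; here a simple running average of rewards suffices.

ACTIONS = (1, 2, 3)
value = defaultdict(float)    # (stones_left, action) -> average reward seen so far
counts = defaultdict(int)

def pick(pile, explore=0.2):
    """Mostly play the best-scoring move seen so far; sometimes explore at random."""
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < explore:
        return random.choice(legal)
    return max(legal, key=lambda a: value[(pile, a)])

def self_play(pile=10):
    """Play one game against itself; return each player's moves and the winner."""
    moves, player = {0: [], 1: []}, 0
    while True:
        action = pick(pile)
        moves[player].append((pile, action))
        pile -= action
        if pile == 0:
            return moves, player          # the player who took the last stone wins
        player = 1 - player

for _ in range(20000):
    moves, winner = self_play()
    for player, history in moves.items():
        reward = 1.0 if player == winner else -1.0   # +1 for a win, -1 for a loss
        for state_action in history:
            counts[state_action] += 1
            value[state_action] += (reward - value[state_action]) / counts[state_action]

print(pick(10, explore=0.0))   # typically 2: leave the opponent a multiple of four
```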

Unlike its predecessors, AlphaGo Zero “is no longer constrained by the limits of human knowledge,” Silver and DeepMind CEO Demis Hassabis wrote in a blog.

Amazingly, AlphaGo Zero used a single machine — a human brain-mimicking “neural network” — compared to the multiple-machine “brain” that beat Lee.

It had four data processing units compared to AlphaGo’s 48, and played 4.9 million training games over three days compared to 30 million over several months.

BEGINNING OF THE END?

“People tend to assume that machine learning is all about big data and massive amounts of computation but actually what we saw with AlphaGo Zero is that algorithms matter much more,” said Silver.

The findings suggested that AI based on reinforcement learning performed better than those that rely on human expertise, Satinder Singh of the University of Michigan wrote in a commentary also carried by Nature.

“However, this is not the beginning of any end because AlphaGo Zero, like all other successful AI so far, is extremely limited in what it knows and in what it can do compared with humans and even other animals,” he said.

AlphaGo Zero’s ability to learn on its own “might appear creepily autonomous”, added Anders Sandberg of the Future of Humanity Institute at Oxford University.

But there was an important difference, he told AFP, “between the general-purpose smarts humans have and the specialised smarts” of computer software.

“What DeepMind has demonstrated over the past years is that one can make software that can be turned into experts in different domains … but it does not become generally intelligent,” he said.

It was also worth noting that AlphaGo was not programming itself, said Sandberg.

“The clever insights making Zero better were due to humans, not any piece of software suggesting that this approach would be good. I would start to get worried when that happens.”

This story originally appeared in news.com.au.

from artificial intelligence – Google News http://www.foxnews.com/tech/2017/10/20/googles-artificial-intelligence-computer-no-longer-constrained-by-limits-human-knowledge.html

Jury.online wants to replace lawyers with blockchain technology

The old jokes are often the best ones. “What do you call 10,000 lawyers at the bottom of the sea?” “A good start.”

One blockchain technology startup has its sights set on sinking at least one category of legal work: the kind handled in small claims court.

Jury.online — which today announced the presale of its Jury.online tokens (JOT) — is using the blockchain to make it easier to settle smaller claims, with the aim of eradicating the hundreds, sometimes thousands, of dollars spent on the lawyers needed to win or defend these cases.

This isn’t, of course, the first time that emerging technologies have been able to step in and succeed at replacing lawyers. The DoNotPay bot has already squashed hundreds of thousands of parking tickets. That solution uses a chatbot as its basis, whereas Jury.online is using blockchain technology to provide complete transparency throughout the process and resolve issues using smart contracts.

What is it about blockchain technology that makes it suitable for small claims?

“It’s a competing marketplace,” Alexander Shevtsov, founder and main developer at Jury.online, told me. “You can hire cheap labor to decide simple cases that require no expertise. Most cases can be handled with common sense rather than a deep understanding of law.”

An example Shevtsov cites is the case of a business that paid for a translation service, only to have the freelancer not complete the job.

So how does it work?

Users are connected with randomly selected jurors who make legal decisions and deliver judgment regarding any kind of dispute.

Deals are then executed via smart contracts and jurors are paid via the Jury.online currency, the JOT.

Users make deals via a transparent and secure decentralized platform. Both parties in a deal define the contract and deposit their funds. The funds remain in the contract until a legal dispute arises.
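The article doesn't publish any contract code, so the following is only a plain-Python model of the escrow flow it describes: both parties define the deal and lock funds, the money sits in the contract until completion or a dispute, and a juror majority decides who gets paid, with a cut going to the jurors. The class, the method names, and the 10% juror fee are assumptions made for the illustration.

```python
# Illustrative model of the escrow flow described above, written in plain Python.
# Jury.online's real implementation would be an on-chain smart contract; the names
# and the fee here are assumptions for the sake of the example.

class DealEscrow:
    def __init__(self, client, contractor, amount):
        self.client, self.contractor, self.amount = client, contractor, amount
        self.funded = False
        self.resolved = False

    def deposit(self):
        """Both parties agree on terms; the funds are locked in the contract."""
        self.funded = True

    def complete(self):
        """No dispute: release the locked funds to the contractor."""
        assert self.funded and not self.resolved
        self.resolved = True
        return {self.contractor: self.amount}

    def dispute(self, juror_votes, juror_fee=0.1):
        """Randomly selected jurors vote; the majority side receives the funds,
        minus a fee paid to the jurors (in JOT, per the article)."""
        assert self.funded and not self.resolved
        self.resolved = True
        for_client = sum(1 for v in juror_votes if v == "client")
        winner = self.client if for_client > len(juror_votes) / 2 else self.contractor
        fee = self.amount * juror_fee
        return {winner: self.amount - fee, "jurors": fee}

deal = DealEscrow("translation_buyer", "freelancer", 1000)
deal.deposit()
print(deal.dispute(["client", "client", "contractor"]))
# {'translation_buyer': 900.0, 'jurors': 100.0}
```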

So is this really a solution that can replace lawyers?

“It will not replace lawyers for complicated cases, but it is a much cheaper alternative for independent dispute resolution,” Shevtsov said. “Lawyers are charging hundreds of dollars an hour, whereas Jury.online can handle a similar case for $100-200. That means disputes centered around a $1,000 deal become possible.”

That’s important, because the smallest of the small claims often go unchallenged. The costs of litigation are simply too high for the affected party to take the case forward.

Is Jury.online only targeted at small claims, or does it have applications in larger cases?

“Bigger claims may also be handled if they require expertise in certain pools,” Shevtsov said. “For example, in remote software development, an expert jury pool may be trusted to draw an independent decision. It may be a much cheaper alternative to resolution in international courts.”

The token associated with this blockchain solution, JOT, is expected to be listed on all the main cryptocurrency exchanges. All transactions in Jury.online, such as opening a new dispute or paying the jury, will be paid for using JOTs.

This is yet another interesting use of blockchain technology. Does Shevtsov think there is any limit to what this emerging field can be applied to?

“Certainly there is a limit to the problems that blockchain technology can solve,” Shevtsov said. “But blockchain seems to have applications in many industries, markets, and departments — especially in government services. Ethereum created smart contracts that can solidify financial deals. Jury.online makes it possible to solidify real-world deals.”

Despite the potential of Jury.online, Shevtsov is realistic about the future.

“Jury.online, of course, can’t solve all the problems, and cannot guarantee fair trial in 100 percent of cases,” Shevtsov said. “But the protocol and jury service will give you a cheaper and more independent dispute resolution.”

Presale of the JOT token starts today, with details available at the Jury.online website.

from VentureBeat https://venturebeat.com/2017/10/23/jury-online-wants-to-replace-lawyers-with-blockchain-technology/

From Pixel to Big Picture

Growing your influence at work

I had the privilege of delivering this talk (adapted here as a blog post) at the National Digital Design Conference (www.nd2c.com) in Pakistan on September 23, 2017. ND2C was the largest design conference in the country so far, and it included renowned international speakers, like Stefan Sagmeister and Debbie Millman, alongside amazingly talented local designers and creatives. It was the vision of two fearless women, executed by them and their team. A shout-out to the Featured Women and our TED coach, Soness Stevens, who helped me bring my first conference talk (and now first Medium post) to life, as well as the friends who helped edit it.

I remember my first few days at Google. I was so excited. A month after this picture was taken, I had gained 7 kgs because of all the free food in the cafeteria.

I used to take the Google buses to work, the ones with the cool interiors and free wifi. But I made sure to sit in the back so I didn’t look like an awkward tourist.

But, as the novelty wore off, I started to doubt how well I was doing. I thought to myself, “I’ve trained in engineering, not design. A year ago, I didn’t even know that those gray boxy diagrams were called wireframes. Did I just do well in the interviews somehow?”

And then, it got worse: the woman who used to sit next to me got fired. I began to wonder if a girl like me from Rawalpindi really belonged at a place like Google, in Silicon Valley.

That same bus became a place where I would quietly put my head down and cry. I felt like an imposter.

Many people feel this way in new situations, but I think designers are particularly prone to it. Many around us don’t understand this new field. And our colleagues sometimes use us as a service to churn out pixels. We are often self-taught, and unsure if designers can truly lead companies.

Sure…we know Steve Jobs used design to make Apple successful. But he was a rare legend, right?

What about other successful companies?

Many successful companies have had founders who were designers by trade. These leaders recognized that as technology becomes more accessible, consumers demand better experiences. They used their design skills to set their products apart.

It’s been 4 years since I started at Google. I have observed many designers in leadership positions and have grown from that self-doubting newbie to a Senior Interaction Designer. I’ve found one main difference between how junior and senior designers approach problems. I want to share that with you so it can help you navigate the relatively uncharted path of a design career.

Our education systems often train us to become specialized experts in narrow fields. We are encouraged to be hyper-focused on a single, discrete problem. However, in an interdisciplinary field like design, we need to balance this focus with breadth. We need to prevent ourselves from getting so distracted by perfecting pixels that we forget to widen our vision and see the big picture.

So, what is it that makes some designers great?

It’s system-level thinking.

Fast Co. design describes this as:

“A mindset — a way of seeing and talking about reality that recognizes the interrelatedness of things.”

Let me explain what this means and why it’s important, through an analogy: my husband and I recently moved to our first apartment together. The first thing we had to buy was a bed. How might you approach that problem? Take a few seconds to think about it.

I might think to myself, “Ooo mid-century modern is really trending these days,” and buy a new king bed in that style.

But let’s assume I’m a little more thoughtful. I discuss the budget with my husband and realize a king bed would be too expensive. It’s more economical to keep his existing queen-sized mattress.

So I go ahead and buy a reasonably priced, queen-size bed instead.

Now the bed arrives. Oh nooo, I didn’t measure the door! The headboard won’t fit!

And when the bedside tables my husband ordered arrive, they are in a dark bachelor-pad style, and don’t fit with my mid-century modern theme.

I should have focused not just on buying a trendy bed but on making the whole house feel cohesive and meeting both our needs.

This included things like:

  • Our financial goals: how we want to divide our overall expenses
  • Physical constraints: the dimensions of the room, the doorways, and the other furniture
  • An agreed aesthetic: what should the look and feel be

Had I considered this ahead of time, I might have bought a reasonably priced queen-sized bed, with a smaller headboard, and agreed on an industrial style that met both our tastes.

Hopefully that gives you a sense about what it means to think about interrelated things at a system-level.

Now, let’s switch gears and see how we can widen our vision when we build products.

Say you have been hired by a food delivery startup. The CEO asks you to update the following screen so users can buy drinks and fries when they are buying a burger.

How would you approach this problem? Take a few seconds to think about it.

One option is to recommend drinks and fries in a carousel at the bottom. Users can tap on items to add them to their cart.

But is there a better way to do this than what we were told? Let’s step back.

The screen before this one has a list of options.

It might be better, for example, to add a combo meal here so we don’t have to encourage users to add items later. This is probably also easier to implement than building a new carousel component.

But why were we asked to allow the user to add a drink or fries in the first place? Why is this important to the business? It’s to increase sales. But is there a better way to do that in the app?

If we look at our app overall, it allows users to go from point A, of wanting to buy burgers, to point B, completing their purchase, through various flows.

It turns out 90% of our users never finish buying what they put in their cart; they abandon it on this checkout page.

So, instead of just focusing on getting larger orders, we can suggest ways of simplifying our checkout page. From this vantage point, we can even call out any inconsistencies in UI patterns used here relative to the rest of the app.

Now we are getting more users from A to B. But sales are still low. Turns out there aren’t many users at point A in the first place because most people aren’t using our app. Let’s look at the whole market.

Maybe our competitors have a really cool feature that we should build as well or we need to partner with burger joints to get more customers.

In order to solve the real problem in the most efficient way, we resisted the temptation to focus only on the screen we were given and explored pain points at the flow and product levels.

Think beyond the features and screens, to the whole system, including user flows and other products in the market.

It can be overwhelming to think about how the rest of the system impacts your goals, but ignoring it doesn’t protect you either. So, let me share three lessons that have helped me:

1. First, get your first win

Do focus on the basics and become an expert at the tools, processes, and methodologies that are core to your job. And then create something, however small, that brings value to others. For me it was launching my first small project. When I received a bonus from my manager, I finally accepted that he didn’t actually think I sucked. Gaining that confidence freed me to think bigger.

2. It is also your job.

I often hear people say, “That’s not my job,” and that inhibits them from understanding how their work relates to that of others on the team. So number two is…yes, it is also your job.

  • Put on the business hat: How can your designs improve the company’s success metrics?
  • Put on the technical hat: What is easy or difficult to build in the given timeline?
  • Put on the UX hat: Are there existing patterns or research that you should leverage?

It’s no wonder, then, that great designers are excellent communicators. You can’t just show your designs; you have to articulate the tradeoffs between a good experience and the business or technical constraints. Invite coworkers to observe research or design sprints so they can also understand the factors that led to the solution. Accept feedback and discuss a way forward. Especially when our peers may not understand UX, we need to speak their language.

3. Be curious.

Learn beyond expectations, even if you can’t see the immediate value. Train your mind to wander productively. When you find a few minutes to spare, open up industry news, read a Medium post, or dabble in a new tool.

10 years ago when the first iPhone came out, those who quickly adapted their designs for smartphones gained a competitive advantage.

In that vein, let’s go back to the food delivery experience from earlier. We were reacting to the picture that exists today. But what if we were curious about the picture we can create?

Given that same task today at Google, I might design an experience that starts on my phone while I’m leaving work, continues through notifications in my car, and ends on an assistive device in my living room — all through voice conversations.

Being curious about emerging technologies can help us re-design our earlier food delivery experience as voice interactions across various devices and contexts.

Technology changes very rapidly: an experience that was impossible a month ago, might be within reach today. By being curious about emerging skills you can keep yourself and your companies relevant.

I’m currently on the Google Assistant team, partly because of a project I helped out with, even though I didn’t know much about conversation design at the time.

It’s said:

“Luck is what happens when preparation meets opportunity.”

I suspect, like many of you, I would not be standing here as a designer today had it not been for curiosity. As I mentioned, I was pursuing engineering in college. By taking photos for a newspaper and creating posters for events, I inadvertently taught myself the Adobe suite. By the time I discovered design as a career, I was already on a path towards it. Curiosity helped me prepare for the opportunity I couldn’t yet see.

So try these three things.

1. Get your first win

2. It’s also your job

3. Be curious

As you adopt this system-level thinking and increase your influence, embrace any self-doubt as growing pains. Remind yourself that you are taking charge of your own career path, against expectations. And like those leaders we look up to, you are improving product strategy through design.

Also don’t forget that by perfecting your own craft and individual products, you are improving the perception of design as an industry around you.

So next time you’re on a project, ask yourself: am I focused on just the pixels, or can I see the big picture?

Thank you.

As each of us improves our individual craft and products, we help improve the perception of design as an industry in our communities (this is a map of Pakistan, where the conference took place).



from uxdesign.cc – Medium https://uxdesign.cc/from-pixel-to-big-picture-c573ddaf971e

This Is How The Way You Read Impacts Your Memory And Productivity

It’s no overstatement that digital mediums have taken over every aspect of our lives. We check what our friends are doing on the glowing screens in our hands, read books on dedicated e-readers, and communicate with customers and clients primarily through email. Yet for all the benefits digital mediums have provided us, a growing body of evidence over the past several years suggests that the brain prefers analog mediums.

Studies have shown that taking notes by longhand will help you remember important meeting points better than tapping notes out on your laptop or smartphone. The reason for that could be that “writing stimulates an area of the brain called the RAS (reticular activating system), which filters and brings clarity to the fore the information we’re focusing on,” according to Maud Purcell, a psychotherapist and journaling expert. If that’s the case, and the analog pen really is mightier than the phone, it’s no wonder some of my colleagues have ditched smartphones for paper planners.

But it’s not just recording our thoughts on an analog medium that appears to be better for us. Absorbing information from analog mediums now appears to be better for memory retention and, thus, productivity. In a study conducted by Anne Mangen, PhD, a professor at the Reading Center at the University of Stavanger, Norway, the researcher gave participants the same 28-page mystery story to read either on an Amazon Kindle or in print. After the participants read the story, they were asked a number of questions about the text.

“We found that those who had read the print pocketbook gave more correct responses to questions having to do with time, temporality, and chronology (e.g., when did something happen in the text? For how long did something last?) than those who had read on a Kindle,” Mangen says. “And when participants were asked to sort 14 events in the correct order, those who had read on paper were better at this than those who had read on the Kindle.”

While this effect has yet to be fully investigated and understood by scientists, Mangen, who now chairs E-READ, a European research network of interdisciplinary scholars and scientists researching the effects and implications of digitization on reading, says one explanation for the benefit of reading analog books may come down to something called metacomprehension deficit. “Metacomprehension refers to how well we are ‘in touch with,’ literally speaking, our own comprehension while reading,” says Mangen. “For instance, how much time do you spend reading a text in order to understand it well enough to solve a task afterwards?”

One study revealed that people think they comprehend information better when they read it on a digital screen, and as a result the digital readers moved through the text much faster than those reading it on paper. Despite spending less time with the text, the digital readers predicted they would perform better on a quiz about it than the people who read it on paper. Yet when the two groups were tested, the paper group outperformed the digital group on memory recall and comprehension of the text, and its members were also closer to their own predictions of how they would perform.

You Don’t Need To Print Off Every Email You Get

Books are one thing, but does our brain absorb information better if we read from other physical mediums, like newspapers and magazines? Not necessarily.

“Length does indeed seem to be a central issue, and closely related to length are a number of other dimensions of a text, e.g., structure and layout. Is the content presented in such a way that it is required that you keep in mind several occurrences/text places at the same time?” says Mangen. In other words, she says, complexity and information density may play a role in the importance of the medium providing the text.

“It may be that for certain types of text or literary genres (for example, page turners), medium does not matter much, whereas for other genres (cognitively and emotionally complex novels, for instance), medium may make a difference to comprehension or to the reading experience. But this remains to be tested empirically.”

In other words, unless people are sending you novel-length emails (which they shouldn’t be), you don’t need to go rushing to the print button, as reading short snippets of information on a screen probably doesn’t hinder memory retention or comprehension.

Print And Digital Can Coexist Peacefully

With all things regarding the brain and human cognition, Mangen also stresses that it wouldn’t be correct to proclaim that information gleaned from print is always going to be just as good, if not better, for memory and comprehension than digital.

“It is not–and should not be–a question of either/or, but of using the most appropriate medium in a given situation, and for a given material/content and purpose of reading,” she says, and notes that a “good starting point is to keep in mind that all media/technologies (old as well as new) have distinct user interfaces, and that the user interface of paper in some circumstances and for some purposes may support key aspects of reading (retention of complex information) or of study (writing notes in the margins) better than digital devices do.”

But for other purposes of reading, for example, presentations with audiovisual material, Mangen concedes a digital device like a tablet is obviously far superior. “There is no one-format/medium-fits-all solution (not even with respect to emails), but it will depend on a number of factors pertaining to the content/text, the reader, the purpose of the reading, the situation, etc.,” she says.

Slow Down When You Read Digitally

If you can’t bear to give up digital books, you aren’t out of luck. As the study cited above mentions, like other digital readers, you probably think you are absorbing the information better than you actually are, and thus move through the book faster.

A simple solution is to slow down and take more time with the material; you may absorb the information just as well as those who naturally take longer to read a paper book.

from Co.Labs https://www.fastcompany.com/40476984/this-is-how-the-way-you-read-impacts-your-memory-and-productivity?partner=feedburner&utm_source=feedburner&utm_medium=feed&utm_campaign=feedburner+fastcompany&utm_content=feedburner

Can blockchain decentralize the internet?

An increasing number of people want to fix the flaws of the internet by decentralizing it, including Sir Tim Berners-Lee, the father of the world wide web; the Mozilla Foundation, the nonprofit organization that supports the Firefox browser and other open-source tools; and Richard Hendricks, the protagonist of HBO’s Silicon Valley.

But what’s wrong with the current internet, and isn’t it already decentralized?

The internet is physically decentralized; no single entity owns it. But large, centralized services support its critical components such as web hosting, cloud computing, DNS services, social media, search engines, email services, and more. These services rely on resources concentrated in a limited number of physical or virtual servers. This approach makes it more convenient for companies to maintain their services.

But the same centralized architecture has created problems. If the servers of these entities go down, we lose access to vital functionality. If they get hacked, we lose our data. If they decide to monetize our data in unlawful ways or hand it over to government agencies, we likely won’t learn about it. If they decide to censor or prioritize content based on their interests, we won’t be able to do anything about it.

In short, we’ve entrusted these entities with too much power, and they’ve become too big to fail.

In a fully decentralized internet, instead of one or a few organizations running the system, a community of users and a network of independent machines would own and power these vital services. This would make them more resilient to failures and hacks while ensuring no single entity can use them in nefarious ways.

Many experts believe blockchain, the technology that is already decentralizing monetary transactions among other things, is the key to solving the decentralized internet puzzle. At its core, blockchain is a distributed ledger that enables a large number of parties to share resources and information without having to trust each other or any central broker. Several companies are now employing the technology to create decentralized versions of vital internet services.

Decentralized web hosting

Distributed denial of service (DDoS) attacks have become a favorite tool for cybercriminals who want to shut down websites. And they’re relatively easy to stage on centralized systems. All an attacker has to do is gather enough firepower and direct it at the web servers that are hosting the target website. The target has to increase its computing resources in order to stay online, an endeavor that is costing hosting companies hundreds of millions of dollars every year.

Blockchain-based platforms fend off DDoS attacks by replacing centralized servers with thousands of nodes, each of which serves a part of the website. Consequently, attackers have no single target to hit.

An example is Gladius, a blockchain startup creating a decentralized content delivery network (CDN) and DDoS mitigation system. Gladius uses the blockchain to distribute files and assets across thousands of computers that share its network. When users sign up with the Gladius network, they can rent out their computer’s idle time, storage, and bandwidth to host websites and receive cryptocurrency in exchange.

Gladius uses self-executing “smart” contracts that run on the blockchain itself to administer and allocate the resources of the network and manage payments. There are several benefits to Gladius’ model. First, it removes centralized storage locations, making DDoS attacks much harder and reducing the costs of hosting websites. Second, it can speed up access to websites by bringing cached content much closer to visitors. And third, it creates an incentive for users to share their idle network and computing resources.
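As a rough sketch of why this blunts DDoS attacks (this is the general idea, not Gladius's actual protocol): the same cached content is replicated on many independent nodes, and a request is served by whichever nearby node is healthy, so there is no single server to overwhelm.

```python
import random

# Toy model of a decentralized CDN: the same cached page lives on many nodes,
# and requests are served by any healthy node close to the visitor.
# An illustration of the idea only; node names and regions are made up.

nodes = [
    {"id": "node-a", "region": "eu", "healthy": True},
    {"id": "node-b", "region": "eu", "healthy": False},   # e.g. knocked offline by an attack
    {"id": "node-c", "region": "us", "healthy": True},
    {"id": "node-d", "region": "us", "healthy": True},
]

def serve(request_region):
    """Prefer a healthy node in the visitor's region; fall back to any healthy node."""
    healthy = [n for n in nodes if n["healthy"]]
    if not healthy:
        raise RuntimeError("no replicas available")
    local = [n for n in healthy if n["region"] == request_region]
    return random.choice(local or healthy)["id"]

print(serve("eu"))   # 'node-a': the site stays up even though node-b is down
```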

Decentralized DNS

Nebulis, a project that employs blockchain and the Interplanetary Filesystem (IPFS), a distributed alternative to centralized web servers, aims to create a decentralized domain name system (DNS). DNS services are critical to enabling users and businesses to access web services. When you type the domain name of a website into your browser, a DNS server translates that name into an internet address (IP address) and helps you connect to the host.

Last year, an attack against the servers of Dyn, a large provider of DNS services, caused a major internet outage across large regions of the U.S. and Europe and cut off users from vital services such as PayPal, Github, and the App Store.

Nebulis expects to prevent this kind of failure by storing, updating, and resolving domain records on the Ethereum blockchain. As a result, hackers won’t be able to disrupt DNS services by targeting their servers. A distributed DNS on the blockchain would also make it exponentially more difficult to stage man-in-the-middle attacks or practice censorship and domain redirection by manipulating DNS records.
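The Nebulis contract interface isn't described in detail here, but the core idea of an on-chain name registry, where a record can only be changed by the account that owns it, can be sketched as follows. This is illustrative Python, not Nebulis's or Ethereum's actual code.

```python
# Minimal sketch of an on-chain style name registry: records map a domain to an
# address, and only the account that registered a name may update it.
# Concept illustration only; not Nebulis's actual contract.

class NameRegistry:
    def __init__(self):
        self.records = {}   # domain -> {"owner": account, "address": ip_or_content_hash}

    def register(self, caller, domain, address):
        if domain in self.records:
            raise PermissionError("domain already registered")
        self.records[domain] = {"owner": caller, "address": address}

    def update(self, caller, domain, address):
        record = self.records[domain]
        if record["owner"] != caller:
            raise PermissionError("only the owner can change this record")
        record["address"] = address

    def resolve(self, domain):
        return self.records[domain]["address"]

registry = NameRegistry()
registry.register("alice", "example.eth", "203.0.113.7")
print(registry.resolve("example.eth"))                    # 203.0.113.7
registry.update("alice", "example.eth", "203.0.113.8")    # allowed: alice owns the record
# registry.update("mallory", "example.eth", "6.6.6.6")    # raises PermissionError
```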

Decentralized data storage

To use today’s centralized internet services, you’re forced to trust them with your data. But many things can go wrong under this model. One example is the 2013 Yahoo data breach, which gave hackers access to the data of more than 3 billion user accounts. More recently, a data breach at credit reporting agency Equifax gave away sensitive information belonging to more than 143 million people. In both cases, hackers had obtained access to the servers of the breached companies.

But hackers aren’t the only people you need to fear. Companies that hold your data can mine it for their own business purposes, share it with government agencies, or sell it to third parties, all without your consent.

One of the benefits of blockchain is that it lets you use applications while retaining ownership of your data. By storing data across a distributed network, blockchain applications obviate the need for centralized storage and ensure that only the true owner will be able to access it.

Storj, a decentralized version of Google Drive and Dropbox, uses blockchain to split files into smaller bits, encrypt them, and distribute them across the many nodes participating in its network. People who make their computer’s storage space available to the network receive cryptocurrency rewards. Storj solves two specific problems. First, it makes sure that your files aren’t stored in a central location, where a service provider or a potential cybercriminal can gain access to them. And second, it speeds up file access speed by letting users download their files piecemeal from several locations at once.
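A minimal sketch of that split-encrypt-distribute pattern is below. It uses the `cryptography` package's Fernet cipher for the encryption step and a simple round-robin placement; Storj's real design adds erasure coding, integrity audits, and payments, so treat this purely as the shape of the idea.

```python
from cryptography.fernet import Fernet   # pip install cryptography

# Sketch of the pattern described above: encrypt a file with a key only the owner
# holds, split it into chunks, and scatter the chunks across independent nodes.
# Illustrative only; not Storj's actual protocol.

def store(data: bytes, nodes: list, chunk_size: int = 64):
    key = Fernet.generate_key()                  # stays with the owner, never with the nodes
    ciphertext = Fernet(key).encrypt(data)
    placements = []
    for i in range(0, len(ciphertext), chunk_size):
        node = nodes[(i // chunk_size) % len(nodes)]   # round-robin placement for the sketch
        node.setdefault("chunks", []).append(ciphertext[i:i + chunk_size])
        placements.append(node["id"])
    return key, placements

def retrieve(key: bytes, nodes: list, placements: list):
    by_id = {n["id"]: n for n in nodes}
    offsets = {n["id"]: 0 for n in nodes}
    ciphertext = b""
    for node_id in placements:                   # reassemble chunks in their original order
        ciphertext += by_id[node_id]["chunks"][offsets[node_id]]
        offsets[node_id] += 1
    return Fernet(key).decrypt(ciphertext)

nodes = [{"id": "peer-1"}, {"id": "peer-2"}, {"id": "peer-3"}]
key, placements = store(b"quarterly report: draft, do not share", nodes)
print(retrieve(key, nodes, placements))   # original bytes; no single peer ever held them all
```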

CryptaMail and John McAfee’s SwiftMail apply the same pattern to sending and receiving emails. The services encrypt messages and store them on the blockchain. Only the receiver of the message has the decryption keys. Distributed email services prevent wholesale theft of user data.

And decentralized social networks such as onG.social and Indorse store their data on the blockchain and run their services through smart contracts, making it impossible for the service provider to invade user privacy. Users get to choose how they share their data with the network and receive cryptocurrency rewards for adding value to the network.

Putting the pieces together

A fully decentralized internet will have its own challenges, but its key promise is robust services that can’t be compromised or owned by any single organization: a distributed network that gives users full control of their digital lives and ensures privacy and equal access. If such an internet is destined to arrive, blockchain will surely be one of its principal building blocks.

Ben Dickson is a software engineer and the founder of TechTalks, a blog that explores the ways technology is solving and creating problems. He writes about technology, business and politics.

from VentureBeat https://venturebeat.com/2017/10/08/can-blockchain-decentralize-the-internet/

Automatically Invest Your Spare Change in Cryptocurrency With This App


If you’ve always wanted to own some cryptocurrency, a new app might be a good way to get your hands on some. Called Coinflash, the app takes the spare change left over from your purchases during the week and uses that cash to invest in the cryptocurrency of your choice through a Coinbase account.

The app itself doesn’t actually take any money from you. Instead, it counts up your spare change from credit and debit transactions (you can choose to connect one account or several, but all of the accounts you select need to be associated with the same bank), and then gives that info to Coinbase, which actually handles the transaction.


It works a lot like Acorns does. That app rounds up purchases and invests your change in the stock market.

If linking accounts concerns you (it should), know that linking your cards only gives the app the ability to read transaction data, not to make purchases. Coinflash says that information is stored in its database for two months, but no longer. Investments can be set up to happen on a weekly or monthly basis.
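The round-up arithmetic itself is simple. Here is an illustrative sketch of how a week's spare change might be totted up from card transactions before being handed to an exchange; it is not Coinflash's code, and the transaction values are made up.

```python
import math

# Illustrative only: how "spare change" round-ups might be totaled from a week
# of card transactions before being used to buy cryptocurrency via an exchange.
# Not Coinflash's actual code or API.

transactions = [4.35, 12.10, 7.99, 23.50]   # a week's debit/credit card purchases

def spare_change(amounts):
    """Round each purchase up to the next whole dollar and sum the differences."""
    return round(sum(math.ceil(a) - a for a in amounts), 2)

weekly_investment = spare_change(transactions)
print(weekly_investment)   # 2.06 -> amount to invest via the linked Coinbase account
```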


To use the service, you pay Coinflash $1 per month in Ethereum or Bitcoin, no matter how much money you end up investing. The $1 is deducted from your Coinbase account. If you stop using Coinflash, then it will stop deducting the $1, even if you keep investing and using Coinbase.


If you’ve been meaning to get into cryptocurrency, then it could be an easy way to dip your toes in the water without putting up tons of cash at once.

from Lifehacker, tips and downloads for getting things done https://lifehacker.com/automatically-invest-your-spare-change-in-cryptocurrenc-1819028364

Designers Aren’t Prepared To Make AI–Here’s How To Get Ready

AI can offer amazing user experiences when it’s well designed–from more effective spam filters to digital assistants that understand the nuances of your voice. But according to interaction designer and Carnegie Mellon professor John Zimmerman, many UX designers are utterly unprepared to design this new wave of AI-centered interfaces.

Zimmerman has designed intelligent systems for 20 years–everything from a TV recommendation system for Philips to a system designed to sense depression and an interface for an algorithm that helps cardiologists decide whether or not to perform heart surgery. He believes there’s a gaping chasm between AI and UX. “UX designers right now go out and do a bunch of field work, but they fail to see opportunities where machine learning can add value,” he said at the Google People + AI Research Symposium in Cambridge, Massachusetts, this month.

There are many reasons for that. For one, the technology itself is very complex, limiting designers’ ability to play with it or even gain a tacit understanding of it. For another, machine learning isn’t part of a standard design education, nor is it included in many mainstream design tools. Zimmerman believes this has led to a gap in UX designers’ skill sets. Yet it’s important that designers think of AI as just another tool in their toolbox–a material to be used responsibly and ethically.


Where’s The Machine Learning Playground?

“Design generally evolves new ideas through a conversation with materials, where you develop a tacit understanding of the material’s capabilities,” Zimmerman tells Co.Design. “This is very hard for designers to do with software, but it’s particularly hard with machine learning.”

Zimmerman points to Ray and Charles Eames’s furniture breakthroughs to demonstrate how designers need to play with and dissect materials to fully understand them. The duo’s material of choice was plywood–they were so obsessed with it that they made it themselves. “Through that process they came up with this entirely new idea for furniture,” he says. “They found a super inexpensive way to manufacture furniture that had a very different look. But it came from playing with the material. Traditionally we train design students by sending them to the shop and studio, to cut paper and play with plastic. We don’t have a machine learning shop.”

In other words, it’s really hard for designers to experiment with machine learning because the technical barriers to entry are still so high. The Eameses didn’t need to be chemical engineers to play around with plywood–but to play with machine learning, you often need a deep understanding of math, data, and statistics.

Some companies, Google included, are working on this problem by trying to create programs that automate the behind-the-scenes process of building a machine learning model, which can require a PhD to complete. But they’re not yet complete, and they could ultimately mean ceding design decisions to the company that built them.


Think Different

But in the meantime, Zimmerman thinks a shift in mind-set is in order. One of the most obvious examples of how machine learning could be used in UX is through adaptivity, or products that learn how a person uses them and then change to accommodate that person. For instance, an adaptive, machine learning-powered UI would learn if you always use the Starbucks app to pay for your coffee and automatically pull up that screen when you’re inside a Starbucks. Companies like Zappos could choose to fill in your shoe size when you’re shopping on the website, or only show you styles in your size. Adaptability isn’t thought of as a standard design element yet, though.

Zimmerman is the first to admit that thinking about how a system learns over time is not part of his own personal design process. For instance, he’s working on a crowdsourced transit mapping project called Tiramisu, which was not initially designed to learn from users’ behavior. Zimmerman says the idea didn’t even occur to him until a new PhD student on the team asked him about it. “It was a super obvious question,” he says. “It made me think about how we don’t even think about [making something adaptive] when we’re doing sketches and wireframing. I’ve been reading about this since the mid ’90s. But it’s not even in the mind-set to say, is this [interface] going to learn?”

Part of the problem is that designers don’t necessarily think about adaptability. But there are also myriad new challenges that come with building machine learning into products. For instance, one of the central concepts Zimmerman is currently thinking through is the idea of an “undo” button. Take the Zappos example. What if you’re shopping for shoes for someone other than yourself, and the site is only showing you shoes in your size? There needs to be a way out–an “undo”–so users can get back to a generic version of the interface. How to achieve that effectively is an open question facing UX designers today.
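One way to picture the pattern Zimmerman is describing, an interface that adapts to repeated behavior but always offers a way back to the generic version, is the sketch below. It is illustrative only; the shoe-size scenario comes from the article's Zappos example, and the class and method names are invented for the illustration.

```python
from collections import Counter

# Sketch of an adaptive default with an explicit "undo": the UI learns the user's
# most frequent choice and pre-selects it, but the user can always reset to the
# generic experience. Illustrative only; not any real product's implementation.

class AdaptiveDefault:
    def __init__(self, generic_default):
        self.generic_default = generic_default
        self.history = Counter()
        self.adaptive = True

    def record(self, choice):
        self.history[choice] += 1           # learn from repeated use

    def suggest(self):
        if self.adaptive and self.history:
            return self.history.most_common(1)[0][0]
        return self.generic_default         # fall back to the generic UI

    def undo(self):
        self.adaptive = False               # e.g. shopping for someone else's shoe size

shoe_size = AdaptiveDefault(generic_default="ask every time")
for size in ["9", "9", "9", "10"]:
    shoe_size.record(size)

print(shoe_size.suggest())   # '9': pre-filled because it's the user's usual size
shoe_size.undo()
print(shoe_size.suggest())   # 'ask every time': back to the generic interface
```

How and when to expose that reset to the user, without cluttering the interface, is exactly the kind of open question the article describes.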


Bringing AI To Education And Design Tools

Zimmerman believes there are ways to help designers prepare for the future. The first lies in design education. “The lowest bar is to simply give students assignments where they need to design an intelligent system and a part of their design is to envision how the intelligence is working below the hood and how the user is going to interact with that,” he says. “That’s the simplest thing: to let them know, people are going to be asking this of you.”

He also proposes curricula that pair design students with data science students, so they get the opportunity to work with someone making models and have the opportunity to experience the kinds of problems machine learning can solve. It’s a plus for data science students, too–they get a lens into the design process and understand how the work they’re doing affects real people.

Outside of education, Zimmerman thinks common design software itself isn’t offering designers the tools they need to prototype and plan adaptive interfaces. “They certainly encourage you to think about navigation. They scaffold you in all kinds of ways,” Zimmerman says. “Why are we not also embedding in those tools the very simplest issues of learning and adaptation to people’s repeated use?”

While the concept of “interfaces that learn” isn’t new, the fact that it hasn’t broken into mainstream design tools means it isn’t really part of designers’ tool kits yet. “There’s a huge opportunity for Sketch, Adobe, or whoever who wants to be the next big tool, to support designers in thinking this through,” Zimmerman says.

Are designers as far behind as Zimmerman thinks? And how can we close the gap between AI and UX? Let us know what you think by emailing us at CoDTips@fastcompany.com.

from WebdesignerNews https://www.fastcodesign.com/90145027/designers-arent-equipped-to-make-ai-heres-how-to-prepare