John Deere is slowly becoming one of the world’s most important AI companies


John Deere has been in business for nearly 200 years. For those in the agriculture industry, the company that makes green tractors is as well-known as Santa Claus, McDonald’s, or John Wayne.

Heck, even city folk who’ve never seen a tractor that wasn’t on a television screen know John Deere. The company’s so popular even celebrities such as Ashton Kutcher and George Clooney have been known to rock a Deere hat.

What most outsiders don’t know is that John Deere’s not so much a farming vehicle manufacturer these days as it is an agricultural technology company. And, judging by how things are going in 2022, we’re predicting it’ll be a full-on artificial intelligence company within the next 15 years.

Watch out Silicon Valley

John Deere’s been heavily invested in robotics and autonomy for decades. Back in the late 1990s, the company acquired GPS startup NavCom in hopes of building satellite-directed guidance systems for tractors.

Within a few years, JD was able to develop a system that was accurate to a few centimeters — previous GPS systems could be off by as much as several meters.

The company then partnered with none other than NASA to create the world’s first internet-based GPS tracking system.

In other words: the path to modern autonomous vehicles was sowed and tilled by John Deere tractors and NASA decades ago.

Automation is par for the course at John Deere. The company has numerous tractors, vehicles, and other smart equipment that offer features ranging from hands-free driver assistance to autonomous weed identification and eradication.

But the recent shift to autonomy has the company positioned to be an important fixture in the AI technology sector.

I spoke to John Deere’s VP of autonomy and automation, Jorge Heraud, to find out exactly what we could expect from the first name in self-driving tractors in the future.

Heraud explained that John Deere approached artificial intelligence technology from two angles:

There’s automation, making machines achieve superhuman performance. And autonomy, making vehicles driverless.

Farmers have long relied on Deere’s automation technologies — the superhuman machines Heraud mentioned.

As an example of this kind of tech, Heraud mentioned AI-powered systems to identify and cull weeds in real time.

It sounds like a simple problem, but nobody wants to spray an entire field with herbicide. And sending humans out to weed thousands of acres by hand is prohibitively costly and labor-intensive.

But, if your farm vehicles can handle weeding in real-time while they’re doing other tasks autonomously, such as tilling, then farmers save time and labor while simultaneously increasing outputs for current and future crops.

On the other side is autonomy, and John Deere’s fascinating driverless tractors.

Behold the 8R:

According to Heraud, this is unlike any other AI-powered tractor the company produces:

This is super exciting. It’s not just driver assistance … the farmer can go and do something else. They can leave the cab and operate everything from an app.

Where previous AI-assistance systems would handle turns and keep rows straight, farmers were still needed in the cab to deal with obstacles, mud, and other little surprises that may crop up.

Now, the 8R can find its way to a field and operate entirely on its own without a human in sight. If it does come across something it doesn’t know how to handle, it alerts the human who can then direct it to avoid the object or come handle the situation personally if need be.

Beyond robo-tractors

Currently, the fully autonomous 8R is set up for tilling, a labor-intensive facet of farming that’s crucial to a farm’s harvest.

Heraud told me that, specifically for corn and soy, farmers across the US often struggle to find labor during peak harvesting months.

In the wake of COVID-19 and pandemic-related travel restrictions, seasonal and migratory workers have been harder to find and farmers are struggling to maintain their fields.

Heraud described the issue as critical. He says the 8R can be a game-changer at harvest time, especially when it comes to getting fields ready for the next planting.

But the company’s ambitions don’t end there. The “future of farming,” as John Deere calls it, will apparently be a robotic revolution.

Deere currently has plans to automate nearly everything on a farm that has a motor — if the company sells it, there’s a good chance they’re trying to turn it into an autonomous robot.

When I asked Heraud if the “fully-automated farm of the future” was something that would arrive within the next decade or two, he replied:

It’ll be here before the end of the decade.

from The Next Web https://thenextweb.com/news/john-deere-slowly-becoming-one-worlds-most-important-ai-companies

Web 2.0 vs web 3.0: how are they different?

The web is constantly evolving. So what might the next chapter of the internet look like? In 2022, it seems we might have an answer to this question—it’s called web 3.0.

Web 3.0 is a relatively new concept which started to gain traction in 2021. It represents the next phase of the internet, which is expected to be a more open, democratized way to access information. Large venture capital firms, like Andreessen Horowitz, see tremendous potential in web 3.0 and are pouring billions into the field. Worldwide interest in the term "Web3" reached an all-time high on Google in December 2021. But despite all the buzz web 3.0 gets in the press, there is still a lot of confusion about what it actually is.

This article will discuss the evolution of the web, explain the difference between web 2.0 and web 3.0, and share some web 3.0 design recommendations.

A brief history of the evolution of the web

Much like human history, the history of the internet is defined by eras: namely, web 1.0, web 2.0, and soon to come, web 3.0. Those eras are represented by various technologies and formats.

Web 1.0 refers to the first stage of web evolution. This era lasted from 1991 until the early 2000s. Web 1.0 websites were mostly collections of static pages that didn’t offer much interaction with the content: all users could do was passively consume the information on the page. One easy way to think about the web of that era is as a giant Wikipedia, with the individual pages of this online encyclopedia as websites.

From an aesthetic point of view, web 1.0 websites had a relatively simple design—page layouts closely resembled text documents, and underlined blue links were the primary interactive element on those pages.


Microsoft homepage (circa 1998). Image: https://web.archive.org/

The web 2.0 era spans from 2004 to the present. This is the version of the internet most of us know today. It also changed our perception of the web. Web 2.0 was built around the idea of the web as a platform: Web 2.0 websites were no longer static pages; they basically became web apps. Web 2.0 pages have a lot of interactive elements that could hardly be imagined in the era of web 1.0—dynamic page layouts adapted to different screens and resolutions, interactive data validation in forms, and even embedded videos.

Web 2.0 also changed the way we work with content—it’s when social media boomed. For the first time, site visitors had an option to consume content or create content themselves. Social networks made it possible to allow user-generated content to be viewed by millions of people around the world. There’s a reason why web 2.0 is known as the era of social media.

Advances in hardware design during this period also popularized smartphones for the first time, so users also had an option to choose the web browsing device they wanted to use, making mobile internet access and social networks the two driving factors of web 2.0.


Facebook profile page (circa 2014). Screenshot: https://time.com/11740/facebook-10-year-anniversary-interfaces/

The Web 2.0 era is also known as the era of the centralized internet. In web 2.0, data and content are centralized within a small group of companies colloquially known as Big Tech: Amazon, Google, and Meta, for instance.

The business model of companies like Meta is based on showing ads. Every time we watch a video on YouTube or write a post on Facebook, the companies provide those services in exchange for our personal data. And with the data they collect from us, it becomes easy for them to serve targeted ads that suggest products and services based on our interests or online activity. As a result, web 2.0 is often criticized for its lack of privacy.

What is web 3.0?

There is no strict definition of what web 3.0 is, so let’s just go with a general introduction to the concept in this section. Web 3.0 represents a movement from company-owned internet platforms to community-owned internet platforms. It is distinct from the original concept of the web, coined by the inventor of the World Wide Web, Sir Tim Berners-Lee, in the 1990s. Back then, Berners-Lee outlined a few essential ideas about the future of the WWW, such as:

  • Bottom-up design. “Instead of code being written and controlled by a small group of experts, it was developed in full view of everyone, encouraging maximum participation and experimentation.”

  • Decentralization. “No permission is needed from a central authority to post anything on the web, there is no central controlling node, and so no single point of failure.”

The term "Web3" was coined in 2014 by Gavin Wood, founder of Polkadot and a co-founder of Ethereum. Wood described web3 as a decentralized online ecosystem based on blockchain. Blockchain addresses one of the most painful limitations of web 2.0: HTTP is a stateless protocol. Blockchain acts as a native state layer that can hold and transfer a user’s state (browsing history, favorites, online purchases, and other available data) independently of tech companies. Think of it as a natural extension of the web’s protocols that lets users keep their history and current state without storing local cookies (information about your web session). Once a user connects to the internet from a new device, the system automatically transfers their state.
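The “native state layer” idea above can be illustrated with a toy hash-linked chain in Python. This is only a minimal sketch of the tamper-evidence property (each record commits to the hash of the previous one), not a real blockchain client; all names and the example state are hypothetical.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class StateChain:
    """Append-only chain of user-state updates, each linked to the previous by hash."""
    def __init__(self):
        self.blocks = []

    def append(self, state):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"prev": prev, "state": state}
        block["hash"] = block_hash({"prev": prev, "state": state})
        self.blocks.append(block)

    def verify(self):
        """Any device can re-derive the hashes and detect tampering."""
        prev = "0" * 64
        for b in self.blocks:
            if b["prev"] != prev or b["hash"] != block_hash({"prev": b["prev"], "state": b["state"]}):
                return False
            prev = b["hash"]
        return True

chain = StateChain()
chain.append({"favorites": ["article-1"]})
chain.append({"favorites": ["article-1"], "purchases": ["nft-42"]})
print(chain.verify())  # True: the history is internally consistent
```

Because every block embeds the hash of its predecessor, any device can re-derive the chain and detect whether the stored history was altered, which is what lets a user’s state live outside any single company’s servers.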


Solution architecture in web 2.0 vs web 3.0. Image: https://www.theblockresearch.com/

Web 3.0 is a movement towards democratizing the internet. Thanks to blockchain, information in web 3.0 can be stored in multiple locations on the network simultaneously, and is therefore decentralized. In web 3.0, online services can be run by decentralized autonomous organizations (DAOs): member-owned communities without centralized leadership. Web 3.0 networks let participants interact directly without going through a trusted intermediary, and let anyone participate without monetizing their personal data. Web 1.0 didn’t have services, so web 3.0 is better understood as an evolution of web 2.0 with a more open and transparent model of ownership.

Decentralized finance (DeFi), essentially a stack of technologies that allow crypto and blockchain applications to operate in digital environments, is another crucial part of web 3.0. DeFi lets users complete financial operations (i.e., send, receive, or exchange money) without bank or government involvement, decentering Big Tech companies and financial institutions. So where does revenue come from? The truth is there’s no one correct answer, because so far we don’t have any solid monetization model apart from NFTs, which we describe below.

Last but not least, web 3.0 will be built around the concept of the semantic web envisioned by Berners-Lee in 2001. In web 3.0, digital systems will be able to understand information much like humans do. This will allow us to create AI-powered systems that imitate how humans learn and think, and that gradually improve their accuracy at everyday tasks such as finding information for users or suggesting a relevant product during online shopping. Rapid development of artificial intelligence and machine learning will make it possible to understand the meaning of content on the web.

Benefits of web3:

  • Better privacy for users. Web3 will provide increased data security because your digital identity is not 100% connected to your real identity. Users’ data are generally anonymized. In comparison with web 2.0, it’s much harder for companies to track you in web 3.0. Since all your content is stored on blockchain, you can consume content (articles, videos) or make purchases without being afraid that companies will trace the real you.

  • It’s not monopolized by big tech companies. Web 2.0 services have a central authority (business owners) that decides who can use them (i.e., they can censor any account at any time). Web3 apps are open to anyone since there are no "gatekeepers" that can prevent users from using apps.

  • Better uptime. Web 2.0 services are based on network infrastructure that can become a point of failure (e.g., the web servers that run a service can go down during a power outage). Decentralization, on the other hand, means that a network of thousands of computers serves as the backend. Even if part of the network goes down, the rest can keep a web3 app alive. As a result, web3 apps have a much lower chance of going down.
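The uptime point can be sketched in miniature: if the same content is replicated across several nodes, a read succeeds as long as any one replica is reachable. A toy Python sketch with a hypothetical node layout (no real networking involved):

```python
def read_with_replication(replicas, key):
    """Return the value from the first live replica; the read succeeds
    as long as at least one copy of the data is reachable."""
    for replica in replicas:
        if replica["alive"]:
            return replica["data"].get(key)
    raise ConnectionError("all replicas down")

# Three nodes hold the same content; two are offline, yet the read still works.
replicas = [
    {"alive": False, "data": {"post:1": "hello web3"}},
    {"alive": False, "data": {"post:1": "hello web3"}},
    {"alive": True,  "data": {"post:1": "hello web3"}},
]
print(read_with_replication(replicas, "post:1"))  # hello web3
```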

Limitations of web3:

  • Slow integration of web3 functionality into modern web browsers. Browsers have only just started introducing web3 functionality like crypto wallets. For now, web3 is less accessible to mass audiences, and apart from digital wallets and NFT collections, there aren’t many web3 apps or websites around the web yet.

  • Difficulty of regulating a decentralized web. In web 3.0, there is no single controlling authority that can force users to follow its rules, and content created by users is owned by users, not platforms. This can be both good and bad: with no moderator, the absence of censorship can lead to the proliferation of harmful content, and for the same reason it will be much harder to prevent cybercrime.

Key differences between web 2.0 vs web 3.0

Web 3.0 represents a movement from company-owned internet platforms to decentralized community-owned internet platforms.

Now that we’ve covered the high-level differences between web 2.0 and web 3.0, it’s essential to list and explain the main differences between the two concepts from a user’s point of view. This section will help you understand what you, as a designer, are expected to do when creating a web 3.0 solution.

From the design perspective, there are a lot of metaphors and patterns used in web 2.0 that can be reused in web 3.0. Web 3.0 is an evolution of the web, not a revolution, so it allows designers to create a familiar experience for users. For example, when designing a financial transaction in web3, it’s possible to use the same system states, such as a processing state and a completion state.

But there are a few new features in web 3.0, and the crypto wallet is chief among them. Crypto wallets act as mediators between clients and servers; they serve as authentication, payment, and collection tools.


The role of wallet in web 3.0 design. Image: https://www.theblockresearch.com/

Companies that want to support web 3.0 will introduce crypto wallets into their platform. From a UX perspective, it should be easy for users to connect their wallets to any web 3.0 app.


Metamask, one of the most popular crypto wallets available on the market. Screenshot: https://metamask.io/download/

The mechanism of paying for and owning things in web 3.0 is also different. You use a crypto wallet to pay with cryptocurrency, and all your digital property, such as a non-fungible token (NFT), is linked to your crypto wallet. NFTs are a type of digital asset considered to be unique, and an NFT can wrap any content: text, photos, images, music, or videos.
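A minimal sketch of what “linked to your crypto wallet” means for NFTs: a registry that maps each unique token id to an owning wallet address, where only the current owner can transfer it. This is an illustrative Python toy, not an actual token standard such as ERC-721; the wallet addresses and token name are made up.

```python
class NFTLedger:
    """Minimal registry mapping a unique token id to the wallet that owns it."""
    def __init__(self):
        self.owners = {}

    def mint(self, token_id, wallet):
        if token_id in self.owners:
            raise ValueError("token ids must be unique")  # non-fungible
        self.owners[token_id] = wallet

    def transfer(self, token_id, sender, receiver):
        if self.owners.get(token_id) != sender:
            raise PermissionError("only the owning wallet can transfer")
        self.owners[token_id] = receiver

ledger = NFTLedger()
ledger.mint("cryptokitty-101", wallet="0xAlice")
ledger.transfer("cryptokitty-101", sender="0xAlice", receiver="0xBob")
print(ledger.owners["cryptokitty-101"])  # 0xBob
```

The design choice worth noting is that ownership is just an entry keyed by the wallet address, which is why “connect your wallet” doubles as both login and proof of ownership in web3 apps.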


Opera crypto wallet shows two CryptoKitties (popular collection of digital images of cats) collectibles in the wallet. Image: Opera.

Another notable change is that web 3.0 is backed by cryptography, with encryption used to secure user identity and data. Although web 3.0 apps embrace security solutions by nature, it’s worth highlighting this fact in your design: giving users a sense of security lets them use your platform without stress.

Last but not least, there is a lot of technical jargon in the web 3.0 space. "Blockchain," "gas fee," and "decentralized apps" are just some of the terms we hear daily when discussing web 3.0. While those terms can be straightforward for technical specialists who work in this domain, they can be unfamiliar to average users. Increasing product accessibility therefore becomes a top priority for web3 designers: when creating your solution, use simple language to communicate with users. For example, when offering Layer 2 (L2) solutions to your users, describe them in terms of the value they bring.

The web is a living organism that evolves all the time. In 2022, web 3.0 is still more of a promise of a future internet than a solution that can be introduced in the coming years. Beyond niche applications, such as tools for crypto traders and digital art collectors (NFTs), we don’t have many web 3.0 apps. The transition to the new era of the WWW won’t happen overnight, but the changes happening right now, however granular, point in the direction of the new web. Designers need to be ready to join the journey.

from Shaping Design Blog https://www.editorx.com/shaping-design/article/web-2-vs-web-3

Houzz Taps Into AR’s ‘B2B2C’ Opportunity



One of AR’s most opportune and under-exposed strategies is what we call B2B2C. This involves AR software that helps companies better reach or serve their customers. This segment is bigger than you may think, as it includes consumer AR developer tools or enabling tech.

But it also applies to non-tech sectors that use AR to improve products. As we’ve examined, companies like Streem bring AR to home-services pros to help them streamline operations by remotely diagnosing leaky faucets and broken heaters through a customer’s upheld smartphone.

Now, the latest company to tap into this opportunity is Houzz. This week, it launches an AR feature that lets home renovation pros visualize finished projects on site. Using the feature through a smartphone or tablet, they can help homeowners make and visualize design choices on the fly.

Housed in the Houzz Pro app, use cases include visualizing walls, windows, cabinets, and other design elements. The feature is built on Houzz’ 3D floor plans, which have tripled in use over the past year, the company tells AR Insider. An AR front-end now brings them to life.

Case Study: Streem Taps the ‘B2B2C’ Opportunity

Cosmetics to Couches

Panning back, this accomplishes a few things. First, it broadens the functionality of AR visualization. The technology has become popular for visualizing products, including everything from cosmetics to couches. This helps consumers gain more confidence before buying.

That’s been amplified further in a pandemic when social distancing and retail lockdowns bestow even greater value on the ability to gain IRL-like product dimension remotely. For example, in eCommerce, consumers can get a better sense of product fit, style, color, and texture.

Houzz has been a leader in AR visualization for all of the above reasons. But its View in My Room 3D feature focused on individual items like rugs and tables. Its latest development positions AR to tackle entire renovation projects by visualizing final results on a project-wide basis.

“View in My Room 3D […] made it clear that people crave a visual connection to their homes when making decisions,” Houzz co-founder and president Alon Cohen told us. “Now, pros can take [AR] a step further by immersing their clients in the design with a life-sized virtual tour.”

What’s Driving Houzz’ AR Success?

Motivated Adopters

The second thing this accomplishes for Houzz goes back to the B2B2C angle noted above. Going deeper on that principle, it brings AR to consumers through the professionals that serve them. It can be an optimal go-to-market strategy, as these pros are more motivated adopters.

To that end, home service pros can use tools like AR to differentiate themselves. Like with Streem, AR can provide an edge in their marketing and customer interactions, as well as operational efficiencies. For example, they can avoid downstream headaches like redoing work.

In other words, they’re better off with more informed and confident customer choices. This is analogous to AR’s ability to reduce product returns in eCommerce because of a more confident initial purchase. Home renovation pros could benefit from a higher-stakes version of that.

“Pros have let us know that the new feature can minimize the amount of back and forth communication typically involved in the planning stage by answering questions up-front,” said Cohen, “giving their clients confidence to move forward with the project.”


from AR Insider https://arinsider.co/2022/03/30/houzz-taps-into-ars-b2b2c-opportunity/

Top 14 Mobile App Development Trends of 2022 | by inVerita | Geek Culture

If you want to keep up with the times, you need to be aware of the latest trends in mobile application development. In this rapidly developing world, where technology doesn’t stand still even for a second and consumer demands keep growing, mobile application development is constantly changing. People now spend much of their time in mobile applications, which puts mobile app development at the forefront of business needs. A forward-looking approach is what enables global companies and developers to make the most of the latest trends in mobile apps.

So what are the latest mobile app development trends in 2022? Check them out in-depth.

One of the most anticipated trends in the mobile app development industry is the rollout of the 5G standard, the next generation of wireless telecommunications networks. By 2025, 5G connections will account for 40% of all mobile connections in Europe and 15% of all mobile connections worldwide, according to a GSMA Intelligence report.

With peak speeds of up to 20 gigabits per second (Gbps), near-zero latency, high connection density, and wide bandwidth, 5G lays the groundwork for the next wave of mobile applications. 5G adoption will increase the speed and efficiency of mobile apps, and by supporting many kinds of connected devices it will expand the scope of the Internet of Things (IoT). The emergence of 5G also significantly expands the possibilities for integrating AR and VR by processing large amounts of data at much faster rates, and it will transform video streaming by supporting real-time, high-resolution streams.

5G could also boost the development of 3D and immersive augmented reality mobile applications, improve GPS performance, increase cloud adaptability, and reduce hardware costs. Its performance gives smart cities new opportunities for digital integration, enhanced security, improved emergency response, modern urban infrastructure management, and personalized healthcare.

Source: Research Gate
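To put the speed claims in perspective, here is a back-of-the-envelope comparison of download times under ideal peak rates, assuming roughly 1 Gbps for 4G LTE-Advanced and the 20 Gbps IMT-2020 target for 5G (real-world throughput is far lower for both):

```python
# Rough transfer-time comparison under ideal peak rates (assumptions, not benchmarks):
# 4G LTE-Advanced peak ~1 Gbps, 5G peak ~20 Gbps per the IMT-2020 target.
FILE_GB = 4  # e.g. a 4 GB high-resolution video

def seconds_to_download(size_gb, link_gbps):
    return size_gb * 8 / link_gbps  # gigabytes -> gigabits, then divide by rate

print(round(seconds_to_download(FILE_GB, 1), 1))   # 32.0 s on 4G
print(round(seconds_to_download(FILE_GB, 20), 1))  # 1.6 s on 5G
```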

“Smart thing” or “smart object” technology, commonly known as IoT, is a burgeoning trend in app development and a chance to modernize products over the next few years. According to a recent Market Research Future analysis, the market for IoT software will reach nearly $6 billion by 2027, up from $3 billion in 2019.

Operating as a loose network of interconnected endpoints such as consumer devices, networks, servers, and applications, IoT technology collects data in real time through wireless transmission using embedded sensors. The most common IoT devices are wearables such as smartwatches, fitness trackers, and activity trackers, but today you can find a “smart” version of almost any home appliance, from washing machines to refrigerators to closet mirrors, not to mention smart home technology. Thanks to IoT, we’ve also seen the emergence of “smart cars,” which could be the next step toward fully integrated traffic control systems.

IoT can improve energy efficiency by tracking and managing energy flows, and IoT applications appear throughout the construction lifecycle and in ongoing building maintenance, where the key goal is to increase productivity and efficiency while reducing operating costs. In the industrial sector, the Internet of Things combined with automation and machine learning helps organizations stay profitable. And, of course, there is security, one of the most common applications of IoT: police, firefighters, and paramedics can take advantage of improved access to real-time information because IoT technology is built into their vehicles.

Virtual reality and augmented reality are emerging mobile app development trends set to significantly improve user experiences on Android and iOS in 2022. The global market for VR and AR technologies is expected to grow from $27 billion in 2018 to $209 billion in 2022, according to Statista. Augmented reality places virtual objects in our current environment in real time, while virtual reality creates an entirely new artificial environment. Immersive technology has made it much easier for app developers to add visual layers to the real world seen through a phone, especially since the arrival of Google’s ARCore and Apple’s ARKit. In fields ranging from healthcare, education, marketing, and manufacturing to retail and travel, AR and VR provide more immersive user experiences. In mobile app development, VR and AR can revolutionize retail and e-commerce, an area where businesses can find a way to get ahead of their competitors. The next level of consumer interaction and user experience is widely expected to be driven by augmented reality.

Source: The Conversation

Beacons are miniature wireless transmitters that broadcast signals to nearby smart devices at regular intervals using low-power Bluetooth technology. They are one of the latest advances in geolocation and proximity marketing, making location-based scanning and interaction easier and more accurate. The technology has been used in industries ranging from retail and marketing to agriculture, healthcare, travel and tourism, and hospitality. According to Statista, the beacon technology market is growing at an average annual rate of 59.8%, and its value is projected to reach $56.6 billion by 2026. Beacons are increasingly reshaping mobile user engagement thanks to their outstanding analytics and targeting capabilities.

Artificial intelligence and machine learning are expanding the potential of the biggest global technology players, especially in mobile development. AI-based assistants are already being developed by giants such as Google and Amazon, and by 2027 the AI industry is predicted to reach $266.92 billion. Chatbots, natural language processing, and biometric identification are just some of the AI and machine learning technologies pushing the potential of custom mobile app development. By 2022, AI and ML will be capable of much more than chatbots and Siri; many companies have already begun using AI in their apps to improve profitability and reduce operating costs in a variety of ways.

What’s more, as reported by IDC, more than 75 percent of workers who use ERP solutions will soon use AI capabilities to enhance their skills.

This means that AI and machine learning are not only deeply embedded in today’s business applications, but also have a lot of room for future innovation.

In the future, we may see DevOps automation with artificial intelligence through AIOps, AI-enabled chips, automated machine learning, and furthermore neural network interaction.

Another trending area of mobile app development is building apps for foldable devices. These devices can use different displays, or even a mix of OLED displays, for their various folding modes. Unfolding the gadget increases the screen size, improving usability and expanding the user interface; the same device can serve as a phone and as a mini-tablet, since the display can be unfolded when needed. Living examples of such smartphones are the Samsung Galaxy Fold and Huawei Mate X.

Although foldable devices are still only a small part of the overall smartphone market, the Foldable Smartphone Shipment Forecast predicts that foldable smartphones will take almost a 5% share of the premium smartphone market by 2023. That’s a fairly compelling reason to learn mobile app development for foldable devices. The technology is a real challenge for developers, but mastering it brings a competitive advantage. Among the most in-demand capabilities of mobile apps for foldable phones is the ability to maintain continuity and smoothness while switching between screens; apps must also support multiple screens and multi-window functionality, and be compatible with both large and small displays.

From unlocking cell phones to banking, finance, and payments, biometric authentication is becoming more common in everyday life, providing an extra layer of security for funds and data.

Mobile biometric verification is a type of multi-factor authentication (MFA) that uses a mobile device as the first factor and a unique biometric identifier, verified on that device, as the second factor to validate an individual’s identity. Although relatively new, it is a top mobile app development trend, typically meaning identification of users by fingerprint or face recognition; biometric authentication can also include more sophisticated techniques such as hand geometry, voice recognition, or iris scanning. With biometric data processing, customers get an additional level of assurance for their data and mobile wallets. Promising directions for biometric authentication include mobile banking, online stores, and healthcare; future uses could even include immigration services.

Source: Applikey Solutions

Another of the latest mobile app development trends is the use of GPS in software. Most of today's applications implement location-based functions, improving performance and providing consumers with a highly tailored experience. Per an analysis by the Location Based Marketing Association (LBMA), 95% of worldwide firms use location-based services to attract new customers. The technology applies to virtually any industry, so it is no wonder the location-based services market is expected to reach $155.13 billion by 2026, according to research by ResearchAndMarkets. The main business benefits of geolocation apps include accurate and fast service, the ability to offer virtual tours, and the opportunity to use users' locations for targeted advertising. Geolocation also serves as an excellent marketing tool for reaching the intended audience and creating additional ways to interact with customers. Using this feature, customers can get information about the nearest places or objects in a given location in real time. Moreover, with users' permission, the application can track their current location, which helps strengthen the brand's position in the media space. Companies may also use a map to collect consumer feedback on specific services, allow customers to attach their own material, such as pictures or text messages, to specific areas on the map, and read other people's reviews.
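The "nearest places" feature described above typically reduces to computing great-circle distances between the user and candidate points. Here is a hedged sketch using the standard haversine formula; the cafe names and coordinates are made up for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest(user, places):
    """Return places sorted by distance from the user's location."""
    return sorted(places, key=lambda p: haversine_km(*user, p["lat"], p["lon"]))

# Illustrative data: cafes near a user in central London
cafes = [
    {"name": "Cafe A", "lat": 51.515, "lon": -0.141},
    {"name": "Cafe B", "lat": 51.501, "lon": -0.124},
]
user_pos = (51.5074, -0.1278)
print(nearest(user_pos, cafes)[0]["name"])  # Cafe B is closer
```

In production an app would query a spatial index or a places API rather than scanning a list, but the distance math is the same.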

As time has gone on, chatbots have evolved, becoming more sophisticated and popular with users. Chatbots could well be deemed a new benchmark in customer service: over half of shoppers want more self-service options to help them make online purchases faster.

Chatbots became a mobile app trend in 2021, and interaction between chatbots and mobile apps will keep progressing well beyond the primitive. Because bots are powered by AI, their replies have become increasingly human-like, making chatbots a smart way to minimize the expense of customer service.

A chatbot is a great solution for serving and supporting your customers. Chatbots automatically qualify potential customers, which helps reduce transaction costs, and they can sort prospects into groups simply by asking questions in a natural, conversational manner. What's more, chatbots help personalize the experience, making users feel noticed and special, which builds brand trust and increases the likelihood of a purchase.
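The lead-qualification behavior described here can be sketched as a simple rule-based classifier. The questions, keywords, and segment names below are illustrative assumptions, not any specific chatbot product; real bots layer NLU models on top of logic like this.

```python
# Illustrative sketch: qualify a prospect from conversational answers.
# Keywords and segment labels are hypothetical.

def qualify(answers: dict) -> str:
    """Sort a prospect into a segment based on budget and timeline answers."""
    budget = answers.get("budget", "").lower()
    timeline = answers.get("timeline", "").lower()
    high_budget = any(k in budget for k in ("10k", "enterprise", "large"))
    urgent = any(k in timeline for k in ("now", "asap", "this month"))
    if high_budget and urgent:
        return "hot lead"
    if high_budget or urgent:
        return "warm lead"
    return "nurture"

answers = {"budget": "around 10k", "timeline": "sometime next year"}
print(qualify(answers))  # warm lead: budget fits, but no urgency
```

Grouping prospects this way is what lets a bot route hot leads to a salesperson while cheaper automated flows handle the rest.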

With the unprecedented emergence of Covid-19, on-demand applications became the new mobile app industry trend. On-demand smartphone applications connect consumers who need services with service providers, and can therefore give immediate answers to challenges. The possibilities of such applications appear limitless, and a rising number of companies are adopting the on-demand model. The on-demand applications market is predicted to produce $335 billion in total revenue by the end of 2025, significantly greater than in 2014.

The range of areas where you can implement an on-demand application is quite wide in the contemporary world: a laundry service, a doctor on demand, a virtual tutor or trainer, food delivery, housekeeping, pet care, or a beauty salon. By adding on-demand functionality to their apps, your clients can improve earnings for everybody.

Source: Monei

The fact that digital wallets are easy and secure makes them another mobile app industry trend. In brief, a digital wallet is an online portal where people or businesses can transact electronically without a hitch. Using a mobile wallet, you can simply and, above all, safely store personal information such as debit and credit card data, a passport, or a driver's license for various payment methods. Connecting common payment gateways to mobile wallets makes the payment procedure faster and easier. E-wallets are quickly approaching their goal of being the most advanced payment method, with easy registration and login, reliable payment processing for merchants and consumers, and a user-friendly control panel.

According to the Mobile Wallets Report, produced jointly with Juniper Research, smartphone-based transactions are expected to increase by over 74% by 2025. Fintech companies can enhance mobile wallets with contactless payment capabilities and in-app purchases of tickets, customer loyalty programs, and discounts. There are also wallets based on artificial intelligence, NFC-based wallets, biometric wallets, and cryptocurrency wallets.

Popular platforms include PayPal, Google Pay, Amazon Pay, Apple Pay, and Samsung Pay.

Corporate mobile apps are the latest mobile app trend among large and medium-sized businesses. Implementing such technology lets firms raise profitability and productivity, manage business processes effectively, and, more importantly, digitally transform and stay competitive in the global marketplace. As shown by Motus: The Mobile Workforce Report, 84% of businesses have seen a boost in overall productivity as a consequence of installing a mobile app. According to Forbes Insights and VMWare, the overwhelming majority (81%) of over 2,200 corporate participants stated that mobility had a direct influence on the success of the firm. With the right apps, complicated operations are simplified while data is preserved.

Blockchain technology can confidently be called a new trend in mobile applications. As it gains prominence, the technology has already influenced many areas of life, such as healthcare, education, real estate, and finance. Blockchain, based on distributed ledger technology (DLT), provides heightened data protection.

The blockchain system helps achieve transparency and mitigates the risk of any fraudulent transactions or misinformation.

By virtue of its robust and reliable base, blockchain extends the security of mobile applications. Even though the technology itself is quite extensive, blockchain-based mobile app development is relatively easy to implement, thanks to the open-source nature of the technology and the tools available. Today, the technology is used to monitor digital assets, provide digital identification, offer cloud-based storage, support loyalty and reward schemes, deliver proof of identity for inventory control, and even record document ownership rights. It's no surprise that the global blockchain market is predicted to eclipse $20 billion in 2024.
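The transparency and tamper-resistance mentioned above come from hash-chaining: each block's hash covers its own contents plus the previous block's hash, so altering any record invalidates everything after it. A minimal illustrative sketch (not a production ledger; it omits consensus, networking, and signatures):

```python
import hashlib
import json

def make_block(data: dict, prev_hash: str) -> dict:
    """Create a block whose hash covers its contents and the previous hash."""
    block = {"data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def valid_chain(chain: list) -> bool:
    """Recompute every hash; any tampering breaks validation."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block({"event": "asset registered"}, prev_hash="0" * 64)
chain = [genesis, make_block({"event": "ownership transferred"}, genesis["hash"])]
print(valid_chain(chain))          # True
chain[0]["data"]["event"] = "forged"
print(valid_chain(chain))          # False: tampering is detected
```

This is the property that makes a blockchain-backed record of digital assets or ownership rights auditable: validation is cheap, and forgery is evident.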

Source: gbksoft.com

Wearable device apps are undoubtedly a trend in the mobile app industry. Smartwatches, fitness wristbands, and fitness trackers are becoming more and more widespread, which drives the growth of this entire market. Just imagine a device that can take measurements and provide real-time analysis of vital health indicators. Such capabilities have a tremendous impact on the medical and sports fields. By applying artificial intelligence, wearable devices are even powerful enough to detect diseases in their earliest stages.

Looking across these mobile app development trends, you can see people's real needs as well as their desires and vision for the future. A number of these trends are already in use on the market. If you intend to run a successful mobile app, it pays to follow the swings of the mobile industry and incorporate the most powerful trends into your final product. Cooperation with a reliable, technologically savvy vendor will make your digital journey smoother, faster, and more secure.

from Medium https://inveritasoft.com/article-top-14-mobile-app-development-trends-of-2021

How Many Consumers Have Tried Mobile AR?



New Study Reveals 30 Percent of U.S. Adults Have Used Augmented Reality

New data from Thrive Analytics and ARtillery Intelligence uncovers who’s using mobile AR apps, how often, and in what categories.

Thrive Analytics and ARtillery Intelligence have released a new report: AR Usage & Consumer Attitudes, Wave V. ARtillery Intelligence authored survey questions and a narrative report while Thrive Analytics administered the survey through its established survey engine and ongoing Virtual Reality Monitor™ research.

Highlights include the fact that 30 percent of U.S. adults have used mobile augmented reality (AR) at least once. More importantly, they’re using it often: 54 percent of mobile AR users engage at least weekly and 75 percent at least monthly. This is a telling indication of mobile AR’s potential, given that active use is a key mobile app success factor and tied closely to revenue metrics.

The top mobile AR app category today is gaming, followed by social. These are driven by popular AR apps and features, such as Pokémon Go and Snapchat AR lenses. Both categories will continue to lead mobile AR according to ARtillery Intelligence, but others will emerge, such as everyday utilities like visualizing products in one’s space during an e-commerce flow.

“Gaming and social are typically where new consumer technologies germinate,” said ARtillery Intelligence Chief Analyst Mike Boland. “But history tells us that lasting value develops around utilities that solve everyday problems. This is amplified in the post-Covid era as sustained eCommerce inflections are supported by AR’s ability to add dimension to shopping.”

Virtual Reality Monitor applies Thrive Analytics’ acumen and time-tested practices in survey research. The AR survey in this wave (Wave V) included a sample of more than 102,000 U.S. adults. Thrive Analytics and ARtillery will continue to analyze AR & VR market opportunities through the lens of consumer sentiments, including key trends uncovered after several waves.

“AR and VR remain in early-adoption phases,” said Thrive Analytics Managing Partner Jason Peaslee. “But though there are typical challenges and adoption barriers, these technologies will gradually transform the way people work, connect, and learn. We’re committed to quantifying that evolutionary path over the next several years.”

AR Usage & Consumer Attitudes, Wave V

Report Availability

AR Usage & Consumer Attitudes, Wave V is available from ARtillery Intelligence and Thrive Analytics. Access to the source database and additional strategic analysis can be obtained from Thrive Analytics.

About ARtillery Intelligence

ARtillery Intelligence chronicles the evolution of spatial computing, otherwise known as AR and VR. Through writings and multimedia, it provides deep and analytical views into the industry’s biggest players, opportunities and strategies. Products include the AR Insider publication and the ARtillery PRO research subscription. Research includes monthly narrative reports, market-sizing forecasts, consumer survey data, and multimedia, all housed in a robust intelligence vault. Find out more here.

About Thrive Analytics

Thrive Analytics is a leading digital marketing research and customer engagement consulting firm. With clients spanning leading national brands as well as publishers and agencies serving the small business community, it pairs proprietary market research services and data analytical tools with time-tested business insights and methodologies to help organizations measurably improve customer experience, loyalty, and sales results. Its mission is to provide superior research and support services that inspire clients to make smarter decisions. Find out more here.

About Virtual Reality Monitor

The Virtual Reality Monitor™ is Thrive Analytics’ proprietary survey of virtual reality/augmented reality technology users. These surveys, conducted semiannually, track the adoption rates, usage, satisfaction levels, profiles, and many other areas related to VR/AR users. Each wave has a customizable section for client-specific inquiries. Results & key insights are communicated in advisory reports & presentations, charts & infographics, newsletters & articles, and custom data views. Information from these studies is used by marketers, product managers, consultants, and other people working in the technology space.


from AR Insider https://arinsider.co/2022/03/24/how-many-consumers-have-tried-mobile-ar-2/

Jeremy Cai on Italic’s first tipping point

This story was originally published on the Product Hunt blog.

Italic’s model makes manufacturers more money and saves shoppers 50–80% on purchases. Founder Jeremy Cai spoke to us about how the company’s flywheel is gaining momentum and what it took to get there.

Direct to Consumer has been the “it girl” strategy for brands for the last decade. Selling directly to consumers instead of through retailers has enabled brands to increase margins, gain better insights on customers, control their image, and more.

That’s all great (mostly)… for the brands.

Jeremy Cai’s parents started a manufacturing business before he was born, so he was familiar with how manufacturers fit into the buying & selling equation. It usually looks something like this:

Let’s say a scarf manufacturer spends $10 on raw labor or materials (in reality, these numbers would be much larger). Their next step is to sell what they make to a retailer or brand for about $12. That retailer typically makes a 5–10x margin on the sale, meaning it will sell the item to you, the customer, for $60 to $120. At the high end, that’s $108 tacked on to the price as the fabric moved from finished good to your neck, even though the end product — your scarf — was the same when the manufacturer handed it over for $12.
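The markup arithmetic above can be sketched in a few lines; the figures are the article's illustrative numbers, not real Italic data.

```python
def retail_price(manufacturer_price: float, margin_multiple: float) -> float:
    """Price to the end customer, given the retailer's margin multiple."""
    return manufacturer_price * margin_multiple

wholesale = 12.0  # what the manufacturer charges the brand or retailer
for multiple in (5, 10):
    price = retail_price(wholesale, multiple)
    markup = price - wholesale
    print(f"{multiple}x margin: customer pays ${price:.0f}, markup ${markup:.0f}")
# At 10x, the customer pays $120 and $108 of that is retail markup,
# which is the gap Italic's model targets.
```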

That $108 is where Jeremy saw opportunity — higher margins for the manufacturer, lower costs for you. Enter, Italic.

I spoke to Jeremy, Italic founder and CEO, not long after Italic announced its $26.9M Series B fundraise.

Doing what you know

“Do what you know” is advice often given to new entrepreneurs undecided on where to start. Jeremy is a repeat founder; you may know him from his first (and recently flourishing) company, the hiring platform Fountain (formerly Onboarding IQ). That mantra stuck with Jeremy throughout his time running Fountain.

About four years into the startup, he started thinking about what would get him excited again.

“My parents started a manufacturing company 30–40 years ago. You never think you’ll do what your parents do…” he started.

This was around the same time a second crop of direct-to-consumer darlings were having their moment, Jeremy explained. You can identify the first crop by Warby Parker and Bonobos, and the second with Glossier and Away.

“Those brands bypassed the retailer, but the products were equivalent to the brands. Once Google and Facebook became saturated, they became the new retailers. If you talked to manufacturers during this time, they would tell you that it doesn’t matter who the buyer is — the retailer or the D2C brands — they’re all the same. ‘I make my small margin and produce high volume for them, and they get 5–10x what I sold it to them for’ they would say.”

Bringing manufacturers online

Jeremy had also been taking note of startups tackling legacy experiences at the time: Uber on cab-hailing and payments, Airbnb on travel stays, etc. So he started thinking:

“What if we were able to offer the same quality and experience as a brand or a retailer, but adopt a managed marketplace mentality, providing the technology and infrastructure to the manufacturers to sell their goods directly to customers?”

In Italic, manufacturers are merchants — the companies that produce goods for top brands, from Burberry to Le Creuset. These merchants offer the same products that they sell to brands and retailers, but at 50–80% lower prices. The savings come from cutting out the middleman: Italic doesn’t do much of what retailers or brands spend their money on, such as inventory or product development. Instead, it provides merchants the technology they need to go online and sell to consumers.

When I asked Jeremy if the Direct to Consumer movement helped pave the way for Italic, he explained that D2C paved the way for people to buy brands online and still have a good experience. In most other ways, though, D2C doesn’t matter; those companies are still just brands.

“Our job has been figuring out how to digitize the supply chain and bring these manufacturers online. We work with about 70 manufacturers or so. 5–10 are publicly listed companies now and the majority of them don’t have websites. That’s how antiquated this industry is. If they’re familiar with eCommerce that’s great; that might make it easier to get them online. But we’ve brought many offline manufacturers online as well.”

Meanwhile, the consumer doesn’t have to know about any of this if they don’t want to.

On B2B2C acquisition: A chicken & egg dilemma

Put so simply, it sounds like an easy sell for manufacturers: sign up with Italic and double or triple your profit margins. Of course, I had an inkling it was much harder than it seemed.

“It was so hard,” laughed Jeremy.

Manufacturers are often generational, some 3rd or 4th generation-led, he explained. Most make money the same way: They take a deposit from a purchase order, which they put towards material, rents, equipment, labor, etc., and claim the additional profit after the goods are produced (for example, a 20% margin on the production run).

“They’ve always gotten paid soon, or at least after the production run. In our case, we’re saying ‘You’re going to make a run, pay for it, and we’ll pay you when we sell it.’ It does take a leap of faith on their side. The first year, Polo (Sourcing Manager) and I visited over 150 manufacturers in Asia, Europe, and the US. Only two said yes — a leather goods manufacturer and a cashmere scarf manufacturer. They’ve stayed with us this whole time.”

In addition to the upfront cost, getting manufacturers on board was also challenging out of the gate, when Italic had only a small number of customers. It was a classic chicken-and-egg problem.

“Italic is a business that benefits from scale. Unit costs are [cheaper] with more volume. The more volume we have, the more leverage we have to pitch and onboard manufacturers. The more manufacturers we get, the more products we can offer. And the more products we have, the more consumers we can acquire. That’s why I think [acquisition] has gotten easier, besides just getting better at the pitch and the technology getting easier to use. A lot of those 150 manufacturers that we first pitched have now become merchants, but it took a long time.”

The lesson here? On the manufacturing side, Italic has made its progress with a simple strategy — “brute force” and hitting the pavement. And on the consumer side…

Finding product-market fit

“I always feel like it’s a moving target,” Jeremy said when I asked him about product-market fit. “Sometimes it feels like you have it and sometimes it’s like ‘Do I really have it?’ A lot of people try to put definitions on it, but the longer I work in tech, the more I feel like you can find product-market fit but then lose it just as quickly.”

I asked Jeremy about product-market fit with myself in mind. It feels like I am likely a persona in Italic’s target audience — but brands are undoubtedly persuasive. After all, I’m a Warby Parker, Casper, and Away customer. I scope out the runway sections of TJ Maxx (or T K Maxx in the UK). So why is Italic’s offering so enticing to me?

There are a lot of reasons why I, or anyone, might choose a brand over a generic label: status, design, and quality are the main ones. Italic’s shot at making a serious dent in the market seems to rely mostly on competing on the last two, i.e. selling products of comparable quality and design at a better price. There’s also the fact that in modern consumerism, we expect our brands to serve a larger purpose. Italic’s model of championing manufacturers has that going for it too.

The challenge comes in explaining all of it.

“Consumers are more educated today and have resources. They take meaningful time to learn how Italic is different, but it takes one extra level of effort to explain that you can buy from the manufacturer, and why it’s cheaper,” Jeremy noted. “We have a lot of tailwinds. Private label has been the fastest-growing segment of new branded product in the last decade, consumers are more willing to buy products from brands they’ve never heard of, and online marketplaces have exploded in popularity.”

The biggest challenge of all, he shared, is education.

“My opinion so far is that the only way to solve for that is organically and with referrals.”

On marketing something that’s hard to explain

The challenge of educating the public on how Italic works hasn’t stopped the startup from experimenting with advertising. Although Jeremy notes Italic has done very little advertising thus far, its comparative marketing approach has definitely caught some attention.

“Comparison is the easiest way we have found to have our customers understand what we offer. It’s an advertisement, it’s very fast to explain.” And in regards to those brands that might not love the approach? “Our job isn’t to play it safe and be friends with everyone, obviously,” he said. “With the model itself, you’re inevitably going to ruffle a couple feathers here and there but ultimately, if it’s in service of the customer, we’ll do the right thing.”

As for Italic’s referral strategy, Jeremy said they haven’t pushed it much, but attribution data backs up his theory. Post-purchase attribution (i.e. “how did you hear about us” surveys) continually reveals friends and family as the number one source of purchases, and last-touch attribution (i.e. the last touchpoint a user had before converting) credits organic traffic. In other words, shoppers went straight to Italic to make a purchase (“for a long time we thought that was a bug, but we’ve audited it”).

Jeremy notes that referral attribution will likely shift as the startup leans into more aggressive marketing, but so far it’s been the moat of the business.

“Our customers are the best ambassadors and educators because it is admittedly hard to explain the whole thing.”

On changing pricing models

Italic reached a tipping point last year. Since its launch, the marketplace had relied on a membership (pay-to-shop) model to make money while it took a negative margin on products to get more manufacturers on board.

“Going back to economies of scale, as volume grew, we saw a lot of benefits in the unit costs and we started to turn green on those products while maintaining the same prices,” Jeremy explained. So Italic opened up its site to allow anyone to purchase products, enabling more people to try Italic first without having to commit to a membership. Memberships remain but they’re offered as an optional upgrade.

“After a lot of customer interviews and looking at the data, we decided on three ways we’ll make money: Membership fees, product sale commissions (like other marketplaces), and service fees for things like fulfillment, payment processing, and creative services. We want to be the most competitive in the market — the highest quality for the most competitive price to a consumer.”

On progress

Between tailwinds and hitting the pavement, the Italic team has gained enough momentum to look optimistically past its cold start problem.

“We’ve built this playbook and flywheel that are really powerful and improving. On the playbook side, we’re able to launch products with velocity, at a price point that is frankly really hard to achieve with both legacy companies and newcomers. Over the holiday season, we increased our product assortment by 50%, and this year we’ll more than double it. On the flywheel, we plan to acquire more customers, use that to acquire more merchants, and bring more products online.

We’re starting to reap the benefits of a lot of effort that took a long time. I’m really excited about taking more and more of our customers’ lifestyle shopping habits and putting the Italic spin on them.”


Jeremy Cai on Italic’s first tipping point was originally published in Product Hunt on Medium, where people are continuing the conversation by highlighting and responding to this story.

from Product Hunt – Medium https://blog.producthunt.com/jeremy-cai-on-italics-first-tipping-point-21494099b44

Why modular has not clicked in commercial construction


Visions were grand, forecasts ambitious: Permanent modular construction would change the industry, saving time, money and the environment. But years after its initial adoption in the U.S., it accounted for just 5.5% of new construction last year in North America, according to the Modular Building Institute (MBI).

Despite promised cost savings, environmental benefits and quicker return on investment, contractors are still running into roadblocks using modular construction. Many face a steep learning curve, and there can be snafus with design, manufacturing, transportation and assembly, they say.

“On paper, and if all goes according to plan, modular construction should result in both time and cost savings. But modular techniques have not cracked the code fully on commercial, and other more complicated, construction types,” said Raja Ghawi, partner at Era Ventures, a proptech-focused venture capital firm, who previously worked as an investment director at Boston-based Suffolk Construction.

Of course, many construction pros vouch for modular’s potential, especially in certain market sectors.


Sebastian Obando/Construction Dive, data from the Modular Building Institute

 

“Over the past five years, we doubled market share … my expectation is that the market share will double again to 10% over the next five years for many of the same reasons it’s growing now,” said Tom Hardiman, executive director at MBI. 

“More owners and architects understand the process, tight labor markets, massive housing shortages — none of those factors will change to negatively impact the industry over the next five years,” he said.

Turner Burton, president of Hoar Construction, a Birmingham, Alabama-based construction company, said modular construction is starting to click, especially in the healthcare sector. The repetition of identical patient rooms or bathroom pods sets up well for a modular operation.

“You have to consider and plan for every step of manufacturing, transport and installation,” said Burton. “You have to have the people who will be installing and putting those pieces together in one room.” 

For example, trade partners were involved in the design phase of a current Hoar project (pictured above) to ensure the fully prefabricated exterior panels could be picked up by a crane and then placed on the building exterior.

Modular has also made inroads in the hospitality industry, with marquee names such as Hilton and Marriott giving the thumbs-up.

Contractors also appreciate the green and safety benefits. Around 88% of contractors said modular construction reduced construction waste, according to a 2020 Dodge SmartMarket report.

At the same time, construction on a factory floor eliminates the potential for falls, the No. 1 cause of construction deaths. About 89% of respondents indicated modular provides a safer worksite, according to Dodge. 

Modular buildings also take 25% to 50% less time to build than traditional methods, which means faster occupancy and return on investment, according to the MBI.

Why modular hasn’t stacked up so far

Nevertheless, contractors say certain challenges in modular projects have stalled its progress. One of the biggest is the new way of approaching each job.

“Everyone needs to stop and unlearn what they know about building for a moment and relearn how this modular stuff has to be sequenced, staged, orchestrated and coordinated with all the infrastructure,” said Jared Bradley, president of The Bradley Projects, a Nashville, Tennessee-based architecture firm. “That’s just a completely different learning curve.”

Workers build the exterior skin panels for a Hoar Construction modular project in a warehouse.

Permission granted by Hoar Construction

 

Architects, consultants, general contractors, and subcontractors all need retraining, including in how to monitor quality and track progress, said Ghawi.

Another challenge is that the initial savings can often be wiped out by design, manufacturing, or execution errors on site, said Ghawi. Those could range from issues with assembling the kits of parts, to shipping and handling from the factory, to proper installation techniques.

Modular’s pre-assembled components also undercut some subcontractors, notably highly skilled trades like plumbers, electricians and steel workers, said Scott DeLano, principal at Certified Construction Services, a Nashville, Tennessee-based general contractor. 

The modules “are already pre-plumbed, they’re pre-wired, and we are basically just providing foundations, landscaping, grading work and all the infrastructure that has to happen for those things to have a home to sit on when they show up,” DeLano said. 

The methods are so dissimilar and used so infrequently that it can be difficult for contractors and subs to get into a cadence.

“As soon as you finish a modular project, you’re back to the conventional methods for a year or two, or three or four, whatever it is, and then all of a sudden, someone comes up with another modular project,” said Bradley. “And everyone’s got to rethink everything from before.”

Preventing damage during transport is another problem.

“To build and transport a modular bathroom or a modular unit requires additional structure beyond what is required to build in the field,” said a senior executive at a general contractor who asked not to be identified for fear of upsetting potential business partners. “You actually end up putting more material into a prefabricated module than you would if you were just building something efficiently on a jobsite.”

With material prices at a 35-year high, any extra cost is unwelcome.

“For prefabricated construction, this is why it is important to establish consistency and repetition, so you can maximize the efficiency of production to offset additional requirements for rigidity and stability in transport,” the senior executive said.

Growth ahead

Despite the hurdles, modular’s market share is expected to grow. Homes built with a modular process use about 17% less material overall, for example, according to MBI. 

Cloud Apartments, a modular housing development company, claims its construction process is 30% more cost effective.

And even Bradley, the contractor who mentioned the steep learning curve, still sees a place for it in his business.  

“There is a time and place for everything, and there’s a time and place for modular construction,” Bradley said. “It’s always one tool in our toolbelt. But I don’t think it’s ever going to be our main tool.”

from Construction Dive – Latest News https://www.constructiondive.com/news/why-modular-has-not-clicked-in-commercial-construction/620892/

Google: Alt Text Only A Factor For Image Search via @sejournal, @MattGSouthern

Google’s use of alt text as a ranking factor is limited to image search. It does not add value for regular web search.

This is explained by Google’s Search Advocate John Mueller during the Google Search Central SEO office-hours hangout recorded on March 18.

Mueller fields several questions related to alt text, resulting in a number of takeaways about the impact it has on SEO.

Adding alt attributes to images is recommended from an accessibility standpoint, as it’s helpful for visitors who rely on screen readers.

From an SEO standpoint, alt text is recommended when your goal is to have an image rank in image search.

As Mueller explains, alt text doesn’t add value to a page when it comes to ranking in web search.

Alt Text Is For Image Search

In the question that relates to the title of this article, Mueller is asked if alt text should be used for decorative images.

That’s a judgment call, Mueller says.

From an SEO point of view, the decision to use alt text depends on whether you care about the images showing up in image search.

Google doesn’t see a page as more valuable to web search because it has images with alt text.

When it comes to using alt text in general, Mueller recommends focusing on the accessibility aspect rather than the SEO aspect.

“I think it’s totally up to you. So I can’t speak for the accessibility point of view, so that’s the one angle that is there. But from an SEO point of view the alt text really helps us to understand the image better for image search. And if you don’t care about this image for image search, then that’s fine do whatever you want with it.

“That’s something for decorative images, sometimes you just don’t care. For things like stock photos where you know that the same image is on lots of other sites, you don’t care about image search for that. Do whatever you want to do there. I would focus more on the accessibility aspect there rather than the pure SEO aspect.

“It’s not the case that we would say a textual webpage has more value because it has images. It’s really just we see the alt text and we apply that to the image, and if someone searches for the image we can use that to better understand the image. It’s not that the webpage in the text web search would rank better because it has an image.”

Hear Mueller’s full response in the video below. Continue reading the next sections for more insights about alt text.

The SEO Impact Of Alt Text

In another question about alt text, Mueller is asked if it’s still worth using alt text when the image itself has text in it.

Mueller recommends avoiding using text in images altogether, but says yes – alt text could still assist in this case.

“I think, ideally, if you have text and images it probably makes sense to have the text directly on the page itself. Nowadays there are lots of ways to creatively display text across a website so I wouldn’t necessarily try to use text in images and then use the alt text as a way to help with that. I think the alt text is a great way to help with that, but ideally it’s better to avoid having text in images.”

The question goes on to ask if alt text would be useful when there’s text on the page describing what’s in the image.

In this case, from an SEO point of view, the text on the page would be enough for search engines.

However, it would still make sense to use alt text for people who use screen readers.

“From a more general point of view, the alt text is meant as a replacement or description of the image, and that’s something that is particularly useful for people who can’t look at individual images, who use things like screen readers, but it also helps search engines to understand what this image is about.

If you already have the same description for a product around the image, for search engines we kind of have what we need, but for people with screen readers maybe it still makes sense to have some kind of alt text for that specific image.”

Alt Text Should Be Descriptive

Mueller emphasizes the importance of using descriptive alt text.

The text should describe what’s in the image for people who aren’t able to view it.

Avoid using generic text, like repeating product names over and over.

“In a case like this I would avoid the situation where you’re just repeating the same thing over and over. So avoid having like the title of a product be used as an alt text for the image, but rather describe the image in a slightly different way. So that’s kind of the recommendation I would have there.

I wouldn’t just blindly copy and paste the same text that you already have on a page as an alt text for an image because that doesn’t really help search engines and it doesn’t really help people who rely on screen readers.”
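Mueller’s recommendations are easy to check mechanically. Here is a small audit sketch using only Python’s standard library (illustrative, not a Google tool; the class and messages are our own) that flags a missing alt attribute, or alt text that merely repeats a given product title:

```python
from html.parser import HTMLParser

# Flag <img> tags with no alt attribute, and alt text that simply
# repeats the product title -- the pattern Mueller advises against.
# An empty alt (alt="") is a valid choice for decorative images,
# so it is deliberately not flagged.
class AltTextAuditor(HTMLParser):
    def __init__(self, product_title=""):
        super().__init__()
        self.product_title = product_title.strip().lower()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = (attrs.get("alt") or "").strip()
        src = attrs.get("src", "?")
        if "alt" not in attrs:
            self.issues.append(f"{src}: missing alt attribute")
        elif alt and alt.lower() == self.product_title:
            self.issues.append(f"{src}: alt just repeats the product title")

auditor = AltTextAuditor(product_title="Acme Widget")
auditor.feed('<img src="a.jpg"><img src="b.jpg" alt="Acme Widget">'
             '<img src="c.jpg" alt="Red widget on a wooden desk">')
print(auditor.issues)  # -> flags a.jpg (missing alt) and b.jpg (title repeat)
```

Only the third image, with a genuinely descriptive alt, passes cleanly.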

Hear Mueller’s full response in the video below:


Featured Image: Screenshot from YouTube.com/GoogleSearchCentral, March 2022. 

from Search Engine Journal https://www.searchenginejournal.com/google-alt-text-only-a-factor-for-image-search/442865/

The Quantum Technology Ecosystem – Explained

If you think you understand quantum mechanics,
you don’t understand quantum mechanics

Richard Feynman

IBM Quantum Computer

Tens of billions of public and private capital are being invested in Quantum technologies. Countries across the world have realized that quantum technologies can be a major disruptor of existing businesses and change the balance of military power. So much so that they have collectively invested ~$24 billion in quantum research and applications.

At the same time, a week doesn’t go by without another story about a quantum technology milestone or another quantum company getting funded. Quantum has moved out of the lab and is now the focus of commercial companies and investors. In 2021 venture capital funds invested over $2 billion in 90+ quantum technology companies, with over $1 billion of it going to quantum computing companies. In the last six months quantum computing companies IonQ, D-Wave and Rigetti went public at valuations close to a billion and a half dollars. Pretty amazing for computers that won’t be any better than existing systems for at least another decade – or more. So why the excitement about quantum?

The Quantum Market Opportunity

While most of the IPOs have been in Quantum Computing, Quantum technologies are used in three very different and distinct markets: Quantum Computing, Quantum Communications and Quantum Sensing and Metrology.

All three of these markets have the potential to be disruptive. In time Quantum computing could obsolete existing cryptography systems, but viable commercial applications are still speculative. Quantum communications could allow secure networking but is not a viable near-term business. Quantum sensors could create new types of medical devices, as well as new classes of military applications, but are still far from a scalable business.

It’s a pretty safe bet that 1) the largest commercial applications of quantum technologies won’t be the ones these companies currently think they’re going to be, 2) defense applications using quantum technologies will come first, and 3) if and when commercial applications do show up, they’ll destroy existing businesses and create new ones.

We’ll describe each of these market segments in detail. But first a description of some quantum concepts.

Key Quantum Concepts

Skip this section if all you want to know is that 1) quantum works, 2) yes, it is magic.

Quantum – The word “Quantum” refers to quantum mechanics, which explains the behavior and properties of atomic or subatomic particles, such as electrons, neutrinos, and photons.

Superposition – quantum particles exist in many possible states at the same time. So a particle is described as a “superposition” of all those possible states. They fluctuate until observed and measured. Superposition underpins a number of potential quantum computing applications.

Entanglement – is what Einstein called “spooky action at a distance.” Two or more quantum objects can be linked so that measurement of one dictates the outcomes for the other, regardless of how far apart they are. Entanglement underpins a number of potential quantum communications applications.

Observation – Superposition and entanglement only exist as long as quantum particles are not observed or measured. If you observe the quantum state you can get information, but it results in the collapse of the quantum system.

Qubit – is short for a quantum bit. It is a quantum computing element that leverages the principle of superposition to encode information via one of four methods: spin, trapped atoms and ions, photons, or superconducting circuits.

Quantum Computers – Background

Quantum computers are a really cool idea. They harness the unique behavior of quantum physics—such as superposition, entanglement, and quantum interference—and apply it to computing.

In a classical computer transistors can represent two states – either a 0 or a 1. Instead of transistors, quantum computers use quantum bits (called qubits). Qubits exist in superposition – in both the 0 and 1 states simultaneously.
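A minimal sketch in plain Python (the helper name is illustrative) shows what superposition means numerically, and why simulating many qubits classically gets hard fast:

```python
import math

# A single qubit is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measuring it yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2.
def measurement_probabilities(alpha, beta):
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    assert math.isclose(p0 + p1, 1.0), "state must be normalized"
    return p0, p1

# The equal superposition (|0> + |1>) / sqrt(2): a 50/50 coin until measured.
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))
print(measurement_probabilities(*plus))  # roughly (0.5, 0.5)

# Describing n qubits classically takes 2**n amplitudes -- the
# exponential state space behind quantum computing's potential.
for n in (10, 50, 300):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```

At 300 qubits the amplitude count exceeds the number of atoms in the observable universe, which is why classical simulation runs out of road so quickly.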

Classical computers use transistors as the physical building blocks of logic. Quantum computers may use trapped ions, superconducting loops, quantum dots or vacancies in a diamond. The jury is still out.

In a classical computer, 2-14 transistors make up each of the seven basic logic gates (AND, OR, NAND, etc.). In a quantum computer, building a single logical qubit requires a minimum of 9, but more likely hundreds or thousands, of physical qubits (to make up for error correction, stability, decoherence and fault tolerance).

In a classical computer compute-power increases linearly with the number of transistors and clock speed. In a Quantum computer compute-power increases exponentially with the addition of each logical qubit.

But qubits have high error rates and need to be ultracold. In contrast classical computers have very low error rates and operate at room temperature.

Finally, classical computers are great for general purpose computing. But quantum computers can theoretically solve some complex algorithms/problems exponentially faster than a classical computer. And with a sufficient number of logical qubits they can become a Cryptographically Relevant Quantum Computer (CRQC). And this is where quantum computers become very interesting and relevant for both commercial and national security applications. (More below.)

Types of Quantum Computers

Quantum computers could potentially do things at speeds current computers cannot. Think of the difference between how fast you can count on your fingers and how fast today’s computers can count. That’s the same order-of-magnitude speed-up a quantum computer could have over today’s computers for certain applications.

Quantum computers fall into four categories:

  1. Quantum Emulator/Simulator
  2. Quantum Annealer
  3. NISQ – Noisy Intermediate Scale Quantum
  4. Universal Quantum Computer – which can be a Cryptographically Relevant Quantum Computer (CRQC)

When you remove all the marketing hype, the only type that matters is #4 – a Universal Quantum Computer. And we’re at least a decade or more away from having those.

Quantum Emulator/Simulator
These are classical computers that you can buy today that simulate quantum algorithms. They make it easy to test and debug a quantum algorithm that someday may be able to run on a Universal Quantum Computer. Since they don’t use any quantum hardware they are no faster than standard computers.

Quantum Annealer is a special purpose quantum computer designed to only run combinatorial optimization problems, not general-purpose computing, or cryptography problems. D-Wave has defined and owned this space. While they have more physical Qubits than any other current system they are not organized as gate-based logical qubits. Currently this is a nascent commercial technology in search of a future viable market.

Noisy Intermediate-Scale Quantum (NISQ) computers. Think of these as prototypes of a Universal Quantum Computer – with several orders of magnitude fewer bits. (They currently have 50-100 qubits, limited gate depths, and short coherence times.) Because they are several orders of magnitude short of the number of qubits needed, NISQ computers cannot perform any useful computation. However, they are a necessary phase in the learning process, especially for driving total-system and software development in parallel with the hardware. Think of them as the training wheels for future universal quantum computers.

Universal Quantum Computers / Cryptographically Relevant Quantum Computers (CRQC)
This is the ultimate goal. If you could build a universal quantum computer with fault tolerance (i.e. millions of error-corrected physical qubits resulting in thousands of logical qubits), you could run quantum algorithms in cryptography, search and optimization, quantum systems simulations, and linear equation solvers. (See here for a list of hundreds of quantum algorithms.) These would all dramatically outperform classical computation on large complex problems that grow exponentially as more variables are considered. Classical computers can’t attack these problems in reasonable times without so many approximations that the result is useless. We simply run out of time and transistors with classical computing on these problems. These special algorithms are what make quantum computers potentially valuable. For example, Grover’s algorithm solves the problem of unstructured search of data. Further, quantum computers are very good at minimization/optimization… think optimizing complex supply chains, energy states to form complex molecules, financial models, etc.
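The size of Grover’s speedup is easy to see with a little arithmetic (a sketch; the helper names are ours, not from any quantum library):

```python
import math

# Unstructured search over N items: a classical scan needs ~N/2 lookups
# on average, while Grover's algorithm needs ~(pi/4) * sqrt(N) oracle
# queries -- a quadratic (not exponential) speedup.
def classical_queries(n):
    return n / 2

def grover_queries(n):
    return (math.pi / 4) * math.sqrt(n)

for n in (10**6, 10**12):
    print(f"N={n:.0e}: classical ~{classical_queries(n):.3g}, "
          f"Grover ~{grover_queries(n):.3g}")
```

For a million items the quantum query count drops from ~500,000 to under 800; at a trillion items the gap becomes roughly a million-fold.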

However, while all of these algorithms might have commercial potential one day, no one has yet come up with a use for them that would radically transform any business or military application. Except for one – and that one keeps people awake at night.

It’s Shor’s algorithm for integer factorization – an algorithm that breaks the hard math problem underlying much of today’s public key cryptography.

The security of today’s public key cryptography systems rests on the assumption that certain math problems are practically impossible to solve: factoring numbers with a thousand or more digits into large primes (e.g., RSA), or computing discrete logarithms over elliptic curves (e.g., ECDSA, ECDH) or finite fields (DSA). None of these can be done with any classical computer, regardless of how large. Shor’s factorization algorithm can crack these codes if run on a Universal Quantum Computer. Uh-oh!
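A toy illustration may help (classical trial division in plain Python, emphatically not Shor’s algorithm): factoring a small semiprime is trivial, but the cost of this approach blows up with the number of digits, which is exactly the asymmetry RSA bets on:

```python
# Classical trial division: fine for toy numbers, hopeless at scale.
# The work grows roughly as sqrt(n), i.e. exponentially in the number
# of digits of n.
def trial_division_factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n is prime

# The classic textbook RSA example: 3233 = 53 * 61.
print(trial_division_factor(3233))  # -> (53, 61)

# A 2048-bit RSA modulus has ~617 decimal digits; this loop would run
# far longer than the age of the universe on one. Shor's algorithm on
# a Universal Quantum Computer would factor it in polynomial time.
```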

Impact of a Cryptographically Relevant Quantum Computer (CRQC)

Skip this section if you don’t care about cryptography.

Not only would a Universal Quantum Computer running Shor’s algorithm make today’s public key algorithms (used for asymmetric key exchanges and digital signatures) useless, but an adversary can also mount a “harvest-now-and-decrypt-later” attack: record encrypted documents now with the intent to decrypt them once a CRQC exists. That means everything you send encrypted today could be read retroactively. Many applications – from ATMs to emails – would be vulnerable unless we replace those algorithms with ones that are “quantum-safe”.

When Will Current Cryptographic Systems Be Vulnerable?

The good news is that we’re nowhere near having any viable Cryptographically Relevant Quantum Computer, now or in the next few years. However, you can estimate when this will happen by calculating how many logical qubits are needed to run Shor’s Algorithm and how long it would take to break these crypto systems. There are lots of people tracking these numbers (see here and here). Their estimate is that a quantum computer with 8,194 logical qubits, built from 22.27 million physical qubits, would take 20 minutes to break RSA-2048. The best estimate is that this might be possible in 8 to 20 years.
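As a sanity check on those numbers (simple arithmetic, nothing more), the cited estimate implies an enormous error-correction overhead:

```python
# Back-of-the-envelope check on the RSA-2048 estimate cited above.
logical_qubits = 8_194
physical_qubits = 22_270_000  # 22.27 million

overhead = physical_qubits / logical_qubits
print(f"~{overhead:,.0f} physical qubits per logical qubit")  # ~2,718
```

Roughly 2,700 physical qubits per logical qubit is what makes today’s 50-100 qubit machines look so far from the goal.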

Post-Quantum / Quantum-Resistant Codes

That means if you want to protect the content you’re sending now, you need to migrate to new Post-Quantum/Quantum-Resistant Codes. But there are three things to consider in doing so:

  1. shelf-life time: the number of years the information must be protected by cyber-systems
  2. migration time: the number of years needed to properly and safely migrate the system to a quantum-safe solution
  3. threat timeline: the number of years before threat actors will be able to break the quantum-vulnerable systems
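The three timelines above combine into a simple risk test, often called Mosca’s inequality (sketched here with illustrative numbers of our choosing):

```python
# If the data's required shelf life plus the time needed to migrate
# exceeds the time until a CRQC arrives, data encrypted today is
# already at risk of harvest-now-decrypt-later attacks.
def quantum_risk(shelf_life_years, migration_years, threat_years):
    return shelf_life_years + migration_years > threat_years

# Illustrative numbers only: records kept 15 years, a 5-year migration,
# and a CRQC assumed to be 12 years out.
print(quantum_risk(15, 5, 12))  # -> True: migration should start now
```

The uncomfortable implication is that organizations with long-lived secrets need to act well before a CRQC actually exists.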

These new cryptographic systems would secure against both quantum and conventional computers and can interoperate with existing communication protocols and networks. The symmetric key algorithms of the Commercial National Security Algorithm (CNSA) Suite were selected to be secure for national security systems usage even if a CRQC is developed.

Cryptographic schemes that commercial industry believes are quantum-safe include lattice-based cryptography, hash trees, multivariate equations, and super-singular isogeny elliptic curves.

Estimates of when you can actually buy a fully error-corrected quantum computer vary from “never” to somewhere between 8 to 20 years from now. (Some optimists believe even earlier.)

Quantum Communication

Quantum communications ≠ quantum computers. A quantum network’s value comes from its ability to distribute entanglement. These communication devices manipulate the quantum properties of photons/particles of light to build Quantum Networks.

This market includes secure quantum key distribution, clock synchronization, random number generation and networking of quantum military sensors, computers, and other systems.

Quantum Cryptography/Quantum Key Distribution
Quantum Cryptography/Quantum Key Distribution can distribute keys between authorized partners connected by a quantum channel and a classical authenticated channel. It can be implemented via fiber optics or free space transmission. China transmitted entangled photons (at one pair of entangled particles per second) over 1,200 km in a satellite link, using the Micius satellite.

The Good: it can detect the presence of an eavesdropper, a feature not provided in standard cryptography. The Bad: Quantum Key Distribution can’t be implemented in software or as a service on a network and cannot be easily integrated into existing network equipment. It lacks flexibility for upgrades or security patches. Securing and validating Quantum Key Distribution is hard and it’s only one part of a cryptographic system.
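To make the eavesdropper-detection idea concrete, here is a toy, purely classical simulation of the BB84 key-distribution scheme (the canonical QKD protocol; the function is a sketch of the idea, not any vendor’s implementation):

```python
import random

# BB84 in caricature: Alice encodes random bits in random bases; Bob
# measures in random bases; they keep only positions where the bases
# matched. An eavesdropper measuring in the wrong basis randomizes
# outcomes, which surfaces as errors when the sifted keys are compared.
def bb84_sifted_key(n_bits, eavesdrop=False, rng=random):
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n_bits)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        basis_seen = a_basis
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            if eve_basis != a_basis:      # wrong basis collapses the state
                bit = rng.randint(0, 1)   # to a random outcome
            basis_seen = eve_basis
        if b_basis != basis_seen:         # Bob's basis mismatch, same effect
            bit = rng.randint(0, 1)
        bob_bits.append(bit)

    # Sift: keep positions where Alice's and Bob's bases matched.
    alice_key = [b for b, a, c in zip(alice_bits, alice_bases, bob_bases) if a == c]
    bob_key   = [b for b, a, c in zip(bob_bits,  alice_bases, bob_bases) if a == c]
    return alice_key, bob_key

rng = random.Random(0)
a, b = bb84_sifted_key(4000, eavesdrop=False, rng=rng)
print(a == b)  # -> True: no eavesdropper, sifted keys agree exactly

a, b = bb84_sifted_key(4000, eavesdrop=True, rng=rng)
error_rate = sum(x != y for x, y in zip(a, b)) / len(a)
print(f"error rate with eavesdropper: ~{error_rate:.0%}")  # typically ~25%
```

The expected ~25% error rate under interception is the tell-tale signature Alice and Bob check for before trusting the key.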

The view from the National Security Agency (NSA) is that quantum-resistant (or post-quantum) cryptography is a more cost effective and easily maintained solution than quantum key distribution. NSA does not support the usage of QKD or QC to protect communications in National Security Systems. (See here.) They do not anticipate certifying or approving any Quantum Cryptography/Quantum Key Distribution security products for usage by National Security System customers unless these limitations are overcome. However, if you’re a commercial company these systems may be worth exploring.

Quantum Random Number Generators (QRNGs)
Commercial Quantum Random Number Generators that use quantum effects (entanglement) to generate nondeterministic randomness are available today. (Government agencies can already make quality random numbers and don’t need these devices.)

Random number generators will remain secure even when a Cryptographically Relevant Quantum Computer is built.

Quantum Sensing and Metrology

Quantum sensors ≠ quantum computers.

This segment consists of Quantum Sensing (quantum magnetometers, gravimeters, …), Quantum Timing (precise time measurement and distribution), and Quantum Imaging (quantum radar, low-SNR imaging, …). Each of these areas can create entirely new commercial products or entirely new industries, e.g. new classes of medical devices, or military systems such as anti-submarine warfare, detecting stealth aircraft, finding hidden tunnels and weapons of mass destruction. Some of these are achievable in the near term.

Quantum Timing
First-generation quantum timing devices already exist as microwave atomic clocks. They are used in GPS satellites to triangulate accurate positioning. The Internet and computer networks use network time servers and the NTP protocol to receive the atomic clock time from either the GPS system or a radio transmission.

The next generation of quantum clocks is even more accurate and uses laser-cooled single ions confined together in an electromagnetic ion trap. This increased accuracy is not only important for scientists attempting to measure dark matter and gravitational waves; miniaturized, more accurate atomic clocks will also allow precision navigation in GPS-degraded/denied areas, e.g. in commercial and military aircraft, in tunnels and caves, etc.

Quantum Imaging
Quantum imaging is one of the most interesting and near-term applications. First-generation magnetometers such as superconducting quantum interference devices (SQUIDs) already exist. New quantum imaging devices use entangled light, accelerometers, magnetometers, electrometers, and gravity sensors. These allow measurements of frequency, acceleration, rotation rates, electric and magnetic fields, photons, or temperature with extreme sensitivity and accuracy.

These new sensors use a variety of quantum effects: electronic, magnetic, or vibrational states or spin qubits, neutral atoms, or trapped ions. Or they use quantum coherence to measure a physical quantity. Or use quantum entanglement to improve the sensitivity or precision of a measurement, beyond what is possible classically.

Quantum imaging applications can have immediate uses in archeology, and profound military applications. For example, submarine detection using quantum magnetometers or satellite gravimeters could make the ocean transparent. It would compromise the survivability of the sea-based nuclear deterrent by detecting and tracking subs deep underwater.

Quantum sensors and quantum radar from companies like Rydberg can be game changers.

Gravimeters or quantum magnetometers could also detect concealed tunnels, bunkers, and nuclear materials. Magnetic resonance imaging could remotely ID chemical and biological agents. Quantum radar or LIDAR would enable extreme detection of electromagnetic emissions, enhancing ELINT and electronic warfare capabilities. It can use fewer emissions to get the same detection result, for better detection accuracy at the same power levels – even detecting stealth aircraft.

Finally, Ghost imaging uses the quantum properties of light to detect distant objects using very weak illumination beams that are difficult for the imaged target to detect. It can increase the accuracy and lessen the amount of radiation exposed to a patient during x-rays. It can see through smoke and clouds. Quantum illumination is similar to ghost imaging but could provide an even greater sensitivity.

National and Commercial Efforts
Countries across the world are making major investments (~$24 billion in 2021) in quantum research and applications.

Lessons Learned

  • Quantum technologies are emerging and disruptive to companies and defense
  • Quantum technologies cover Quantum Computing, Quantum Communications and Quantum Sensing and Metrology
    • Quantum computing could obsolete existing cryptography systems
    • Quantum communication could allow secure cryptography key distribution and networking of quantum sensors and computers
    • Quantum sensors could make the ocean transparent for Anti-submarine warfare, create unjammable A2/AD, detect stealth aircraft, find hidden tunnels and weapons of mass destruction, etc.
  • A few of these technologies are available now, some in the next 5 years and a few are a decade or more out
  • Tens of billions of public and private capital dollars are being invested in them
  • Defense applications will come first
  • The largest commercial applications won’t be the ones we currently think they’re going to be
    • when they do show up they’ll destroy existing businesses and create new ones

from Steve Blank https://steveblank.com/2022/03/22/the-quantum-technology-ecosystem-explained/