Archive

Archive for January, 2024

The Art of Engineering AI Prompts

January 31st, 2024

In the rapidly evolving world of artificial intelligence, the ability to communicate effectively with AI tools has become an indispensable skill. Whether you’re generating content, solving complex data problems, or creating stunning digital art, the quality of the outcomes you receive is directly influenced by the prompts you provide.

Categories: Designing, Others

Top CRM Features and Benefits for Small Businesses

January 30th, 2024

Customer relationship management (CRM) systems offer businesses a powerful tool for creating lasting customer relationships in today’s highly competitive environment, helping to manage contacts efficiently while streamlining operations and driving growth. By 2025, CRM industry revenue is expected to exceed the $80 billion mark, reflecting its growing ubiquity as an integral element in modern business strategies.

In this blog post, we’ll examine the top CRM features tailored for small businesses, while discussing their transformative benefits. From contact management to sales automation and insightful analytics, CRM systems are more than simple technological solutions. 

A quality CRM system is a strategic asset that increases efficiency, enhances customer satisfaction, and enables sustainable expansion, making it an invaluable tool for small businesses looking to thrive in today’s increasingly competitive business world. Join us as we uncover how these tools can act as catalysts in propelling success forward.

Contact Management

Contact management serves as the core of any quality CRM system, actively centralizing and organizing a plethora of contacts for small businesses. This includes customers, leads, suppliers, and partners. Integrating a CRM system enables businesses to effectively organize and manage this information.

Active management ensures quick and easy access to all relevant details about each contact, providing a holistic view that enables personalized communication and strategic decision-making. The streamlined contact management process in a CRM system optimizes efficiency and enhances overall business operations.

Lead and Opportunity Management

In the realm of small businesses, each lead holds the promise of potential growth. CRM systems play an essential role in actively tracking and managing leads throughout the sales pipeline. By prioritizing leads efficiently, businesses can channel their efforts toward high-value opportunities with higher conversion rates that may generate substantial revenues.

The comprehensive lead and opportunity management facilitated by CRM systems ensures a dynamic approach to business development, contributing significantly to sustained success and expansion.

Sales Automation

CRMs allow small businesses to automate repetitive tasks, including data entry, follow-up emails, and appointment scheduling. Sales automation handles these tasks more efficiently, optimizing the time sales teams spend while decreasing manual errors and maintaining consistent communication with prospects and customers. Not only does this increase efficiency, but it also frees up valuable sales representatives’ time so they can focus on cultivating relationships and closing deals. Coupled with small business marketing automation tools, CRMs can become one of the best assets in a team’s sales and marketing toolbox.

Customer Communication Tracking

Understanding customer interactions is critical to providing exceptional service. CRM systems enable small businesses to consolidate all communication channels, such as emails, calls, and meetings, in one central place for easy tracking of interactions. This provides teams with a complete picture that allows them to respond promptly to inquiries, anticipate needs, and deliver tailored experiences that build customer loyalty among their client base.

Task and Calendar Management

Tracking tasks and deadlines is essential to any business, especially small enterprises with limited resources. CRM systems offer task and calendar management features that help teams stay organized, keep key activities on schedule, and ensure nothing falls through the cracks while taking a proactive approach to managing operations.

Integration Capabilities

Small businesses rely on various tools to manage different aspects of their operations. A CRM system with strong integration capabilities enables seamless communication with essential applications like email, marketing automation, and accounting software. This helps small businesses eliminate silos while improving collaboration. It also guarantees that data flows smoothly throughout their organizations.

Mobile Accessibility

At a time when remote working and on-the-go business activities are the norm, having a CRM system with mobile accessibility is a necessity for small business teams. This feature lets them access important information, update records, and communicate with clients wherever they may be located, increasing flexibility so no opportunities are missed due to geographical constraints.

Benefits for Small Businesses

Enhanced Efficiency

CRM systems play a pivotal role in substantially enhancing the efficiency of small businesses. Through task automation, streamlined communications, and centralized information management, these systems empower teams to navigate core activities more seamlessly. 

This results in heightened productivity and a notable reduction in time wastage, allowing small businesses to allocate resources strategically and thrive in a competitive landscape. The comprehensive capabilities of CRM systems actively contribute to operational excellence, facilitating sustained growth and success.

Improved Customer Relationships

Small businesses utilize CRM systems to actively cultivate strong connections with their customers. These systems facilitate personalized communication, timely responses, and a comprehensive overview of interactions, aiding businesses in fostering long-term customer loyalty and elevating overall customer satisfaction.

Increased Sales and Revenue

Effective lead and opportunity management combined with sales automation is the cornerstone of higher conversion rates and revenue growth for small businesses. CRM systems enable these organizations to prioritize the most promising opportunities and pursue them aggressively for increased profit potential in their sales pipeline.

Scalability

As small businesses expand, so do their needs. A scalable CRM system can adapt to meet these evolving demands of a growing enterprise, making sure that its tools continue to support and enhance operations.

Final Thoughts

Small businesses seeking success in an increasingly competitive marketplace should see CRM as a strategic imperative. Small businesses can increase efficiency, customer satisfaction, and business expansion by harnessing its features and benefits. Through its capabilities, they can position themselves for long-term success in an ever-evolving landscape.

Categories: Others

The Feature Trap: Why Feature Centricity Is Harming Your Product

January 29th, 2024

Most product teams think in terms of features. Features are easy to brainstorm and write requirement docs for, and they fit nicely into our backlogs and ticketing systems. In short, thinking in terms of features makes it easy to manage the complex task of product delivery.

However, we know that the best products are more than the sum of their parts, and sometimes, the space between the features is as important as the features themselves. So, what can we do to improve the process?

The vast majority of product teams are organized around delivering features — new pieces of functionality that extend the capabilities of the product. These features will often arise from conversations the company is having with prospective buyers:

  • “What features are important to you?”
  • “What features are missing from your current solution?”
  • “What features would we need to add in order to make you consider switching from your existing provider to us?” and so on.

The company will then compile a list of the most popular feature requests and will ask the product team to deliver them.

For most companies, this is what customer centricity looks like: asking customers to tell them what they want, and then building those features into the product in the hope they’ll buy. This is based on the fundamental belief that people buy products primarily for their features, so we assemble our roadmaps accordingly.

We see this sort of thinking with physical products all the time. For instance, take a look at the following Amazon listing for one of the top-rated TV sets from last year. It’s like they hurled up the entire product roadmap directly onto the listing!

Now, of course, if you’re a hardcore gamer with very specific requirements, you might absolutely be looking for a TV with “VRR, ALLM, and eARC as specified in HDMI2.1, plus G-Sync, FreeSync, Game Optimizer, and HGiG.” But for me? I don’t have a clue what any of those things mean, and I don’t really care. Instead, I’ll go to a review site where they explain what the product actually feels like to use in everyday life. The reviewers will explain how good the unboxing experience is. How sturdy the build is. How easy it is to set up. They’ll explain that the OS is really well put together and easy to navigate, the picture quality is probably the best on the market, and the sound, while benefiting from the addition of a quality sound bar, is very clear and understandable. In short, they’ll be describing the user experience.

The ironic thing is that when I talk to most founders, product managers, and engineers about how they choose a TV, they’ll say exactly the same thing. And yet, for some reason, we struggle to take that personal experience and apply it to our own users!

Tip: As a fun little trick, next time you find yourself arguing about features over experience, ask people to get out their phones. I bet that the vast majority of folks in the room will have an iPhone, despite Samsung and Google phones generally having better cameras, more storage, better screens, and so on. The reason why iPhones have risen in dominance (if we ignore the obvious platform lock-in) is because, despite perhaps not having the best feature set on the market, they feel so nice to use.

Seeing Things From The Users’ Perspective

While feature-centric thinking is completely understandable, it misses a whole class of problems. The features in and of themselves might look good on paper and work great in practice, but do they mesh together to form a convincing whole? Or is the full experience a bit of a mess?

All the annoying bumps, barriers, and inconsistencies that start accruing around each new feature, if left unsolved, can limit the amount of value users can extract from the product. And if you don’t effectively identify and remove these barriers in a deliberate and structured way, any additional functionality will simply add to the problem.

If users are already struggling to extract value from existing features, how do you expect them to extract any additional value you might be adding to the product?

“As a product manager, it’s natural to want to offer as many features as possible to your customers. After all, you want to provide value, right? But what happens when you offer too many features? Your product becomes bloated, convoluted, and difficult to use.”
— “Are Too Many Features Hurting Your Product?”

These barriers and inconsistencies are usually the result of people not thinking through the user experience. And I don’t mean user experience in some abstract way. I mean literally walking through the product step-by-step as though you’d never seen it before — sometimes described as having a “beginner’s mind” — and considering the following questions:

  • Is it clear what value this product delivers and how I can get that value?
  • If I were a new user, would the way the product is named and structured make sense to me?
  • Can I easily build up a mental model of where everything is and how the product works?
  • Do I know what to do next?
  • How is this going to fit into my existing workflow?
  • What’s getting in my way and slowing me down?

While approaching things with a beginner’s mind sounds easy, it’s actually a surprisingly hard mindset for people to adopt — letting go of everything they know (or think they know) about their product, market, and users. Instead, their position as a superuser tends to cloud their judgment: believing that because something is obvious to them (something that they have created and have been working on for the past two years), it will be obvious to a new user who has spent less than five minutes with the product. This is where usability testing (a UX research method that evaluates whether users are able to use a digital product efficiently and effectively) should normally “enter the stage.”

The issue with trying to approach things with a beginner’s mind is also often exacerbated by “motivated reasoning,” the idea that we view things through the lens of what we want to be true, rather than what is true. To this end, you’re much more likely to discount feedback from other people if that feedback is going to result in some negative outcome, like having to spend extra time and money redesigning a user flow when you’d rather be shipping that cool new feature you came up with last week.

I see this play out in usability testing sessions all the time. The first subject comes in and struggles to grasp a core concept, and the team rolls their eyes at the incompetence of the user. The next person comes in and has the same experience, causing the team to ask where you found all these stupid users. However, as the third, fourth, and fifth person comes through and experiences the same challenge, “lightbulbs” slowly start forming over the team members’ heads:

“Maybe this isn’t the users’ fault after all? Maybe we’ve assumed a level of knowledge or motivation that isn’t there; maybe it’s the language we’ve used to describe the feature, or maybe there’s something in the way the interface has been designed that is causing this confusion?”

These kinds of insights can cause teams to fundamentally pivot their thinking. But this can also create a huge amount of discomfort and cognitive dissonance — realizing that your view of the world might not be entirely accurate. As such, there’s a strong motivation for people to avoid these sorts of realizations, which is why we often put so little effort (unfortunately) into understanding how our users perceive and use the things we create.

Developing a beginner’s mind takes time and practice. It’s something that most people can cultivate, and it’s actually something I find designers are especially good at — stepping into other people’s shoes, unclouded by their own beliefs and biases. This is what designers mean when they talk about using empathy.

Towards A Two-Tier Process (Conclusion)

We obviously still need to have “feature teams.” Folks who can understand and deliver the new capabilities our users request (and our business partners demand). While I’d like to see more thought and validation when it comes to feature selection and creation, it’s often quicker to add new features to see if they get used than to try and use research to give a definitive answer.

As an example, I’m working with one founder at the moment who has been going around in circles with their product team for months about whether a feature would work. He eventually convinced them to give it a try — it took four days to push out the change, and they got the feedback they needed almost instantly.

However, as well as having teams focused on delivering new user value, we also need teams who are focused on helping unlock and maximize existing user value. These teams need to concentrate on outcomes over outputs; so, less “deliver X capability in Y sprints” and more “deliver X improvement by Y date.” To do this, these teams need to have a high level of agency. This means taking them out of the typical feature factory mindset.

The teams focusing on helping unlock and maximize existing user value need to be a little more cross-disciplinary than your traditional feature team. They’re essentially developing interventions rather than new capabilities — coming up with a hypothesis and running experiments rather than adding bells and whistles. “How can we improve the onboarding experience to increase activation and reduce churn?” Or, “How can we improve messaging throughout the product so people have a better understanding of how it works and increase our North Star metric as a result?”

There’s nothing radical about focusing on outcomes over outputs. In fact, this way of thinking is at the heart of both the Lean Startup movement and Product-Led Growth. The problem is that while this is seen as received wisdom, very few companies actually put it into practice (although if you ask them, most founders believe that this is exactly what they do).

Put simply, you can’t expect teams to work independently to deliver “outcomes” if you fill their calendar with output work.

So this two-tier system is really a hack, allowing you to keep sales, marketing, and your CEO (and your CEO’s partner) happy by delivering a constant stream of new features while spinning up a separate team who can remove themselves from the drum-beat of feature delivery and focus on the outcomes instead.

Further Reading

  • “Why Too Many Features Can Ruin a Digital Product Before It Begins” (Komodo Digital)
    Digital products are living, ever-evolving things. So, why do so many companies force feature after feature into projects without any real justification? Let’s talk about feature addiction and how to avoid it.
  • “Are Too Many Features Hurting Your Product?” (FAQPrime)
    As a product manager, it’s natural to want to offer as many features as possible to your customers. After all, you want to provide value, right? But what happens when you offer too many features? Your product becomes bloated, convoluted, and difficult to use. Let’s take a closer look at what feature bloat is, why it’s a problem, and how you can avoid it.
  • “Twelve Signs You’re Working in a Feature Factory,” John Cutler
    The author started using the term Feature Factory when a software developer friend complained that he was “just sitting in the factory, cranking out features, and sending them down the line.” This article was written in 2016 and still holds its ground today. A newer version appeared in 2019 (“Twelve Signs You’re Working in a Feature Factory — Three Years Later”).
  • “What Is The Agile Methodology?” (Atlassian)
    The Agile methodology is a project management approach that involves breaking the project into phases and emphasizes continuous collaboration and improvement. Teams follow a cycle of planning, executing, and evaluating.
  • “Problem Statement vs Hypothesis — Which Is More Important?,” Sadie Neve
    When it comes to experimentation and conversion rate optimization (CRO), we often see people relying too much on their instincts. But in reality, nothing in experimentation is certain until tested. This means experimentation should be approached like a scientific experiment that follows three core steps: identify a problem, form a hypothesis, and test that hypothesis.
  • “The Build Trap,” Melissa Perri (Produx Labs)
    The “move fast and break things” mantra seems to have taken the startup world by storm since Facebook made it their motto a few years ago. But there is a serious flaw with this phrase, and it’s that most companies see this as an excuse to stop analyzing what they intend to build and why they should build it — those companies get stuck in what I call “The Build Trap.”
  • “What Is Product-led Growth?” (PLG Collective)
    We are in the middle of a massive shift in the way people use and buy software. It’s been well over a decade since Salesforce brought software to the cloud. Apple put digital experiences in people’s pockets back in 2009 with the first iPhone. And in the years since, the market has been flooded with consumer and B2B products that promise to meet just about every need under the sun.
  • The Lean Startup
    The Lean Startup isn’t just about how to create a more successful entrepreneurial business. It’s about what we can learn from those businesses to improve virtually everything we do.
  • “Usability Testing — The Complete Guide,” Daria Krasovskaya and Marek Strba
    Usability testing is the ultimate method of uncovering any type of issue related to a system’s ease of use, and it truly is a must for any modern website or app owner.
  • “The Value of Great UX,” Jared Spool
    How can we show that a great user experience produces immense value for the organization? We can think of experience as a spectrum, from extreme frustration to delight. In his article, Jared will walk you through how our work as designers is able to transform our users’ experiences from being frustrated to being delighted.
  • “Improving The Double Diamond Design Process,” Andy Budd (Smashing Magazine)
    The so-called “Double Diamond” is a great way of visualizing an ideal design process, but it’s just not the way most companies deliver new projects or services. The article proposes a new “Double Diamond” idea that better aligns with the way work actually gets done and highlights the place where design has the most leverage.
  • “Are We Moving Towards a Post-Agile Age?,” Andy Budd
    Agile has been the dominant development methodology in our industry for a while now. While some teams are just getting to grips with Agile, others have extended it to the point that it’s no longer recognizable as Agile; in fact, many of the most progressive design and development teams are Agile only in name. What they are actually practicing is something new, different, and innately more interesting — something I’ve been calling Post-Agile thinking.
Categories: Others

15 Best New Fonts, January 2024

January 29th, 2024

It’s January, and what better time to revisit our typography and freshen up our designs for 2024?

Categories: Designing, Others

A Simple Guide To Retrieval Augmented Generation Language Models

January 26th, 2024

Suppose you ask some AI-based chat app a reasonably simple, straightforward question. Let’s say that app is ChatGPT, and the question you ask is right in its wheelhouse, like, “What is LangChain?” That’s really a softball question, isn’t it? ChatGPT is powered by the same sort of underlying technology, so it ought to ace this answer.

So, you type and eagerly watch the app spit out conversational strings of characters in real-time. But the answer is less than satisfying.

In fact, ask ChatGPT — or any other app powered by language models — any question about anything recent, and you’re bound to get some sort of response along the lines of, “As of my last knowledge update…” It’s like ChatGPT fell asleep Rip Van Winkle-style back in January 2022 and still hasn’t woken up. You know how people say, “You’d have to be living under a rock not to know that”? Well, ChatGPT took up residence beneath a giant chunk of granite two years ago.

While many language models are trained on massive datasets, data is still data, and data becomes stale. You might think of it like Googling “CSS animation,” and the top result is a Smashing Magazine article from 2011. It might still be relevant, but it also might not. The only difference is that we can skim right past those instances in search results while ChatGPT gives us some meandering, unconfident answers we’re stuck with.

There’s also the fact that language models are only as “smart” as the data used to train them. There are many techniques to improve a language model’s performance, but what if language models could access real-world facts and data outside their training sets without extensive retraining? In other words, what if we could supplement the model’s existing training with accurate, timely data?

This is exactly what Retrieval Augmented Generation (RAG) does, and the concept is straightforward: let language models fetch relevant knowledge. This could include recent news, research, new statistics, or any new data, really. With RAG, a large language model (LLM) is able to retrieve “fresh” information for more high-quality responses and fewer hallucinations.

But what exactly does RAG make available, and where does it fit in a language chain? We’re going to learn about that and more in this article.

Understanding Semantic Search

Unlike keyword search, which relies on exact word-for-word matching, semantic search interprets a query’s “true meaning” and intent — it goes beyond merely matching keywords to produce results that bear a closer relationship to the original query.

For example, a semantic search querying “best budget laptops” would understand that the user is looking for “affordable” laptops without querying for that exact term. The search recognizes the contextual relationships between words.

This works because of text embeddings or mathematical representations of meaning that capture nuances. It’s an interesting process of feeding a query through an embedded model that, in turn, converts the query into a set of numeric vectors that can be used for matching and making associations.

The vectors represent meanings, and there are benefits that come with it, allowing semantic search to perform a number of useful functions, like scrubbing irrelevant words from a query, indexing information for efficiency, and ranking results based on a variety of factors such as relevance.
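
To make this concrete, here’s a minimal sketch of semantic matching with text embeddings using the sentence-transformers library (which we’ll install later in this article anyway). The model name and sample documents are illustrative choices, not requirements:

# A minimal sketch of semantic matching with text embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

query = "best budget laptops"
documents = [
    "Affordable notebooks under $500 reviewed",
    "The best gaming desktops of the year",
    "Cheap laptop deals for students",
]

# Convert the query and documents into numeric vectors (embeddings).
query_vec = model.encode(query, convert_to_tensor=True)
doc_vecs = model.encode(documents, convert_to_tensor=True)

# Cosine similarity scores how close each document's meaning is to the query.
print(util.cos_sim(query_vec, doc_vecs))

The gaming desktop line should score lowest here, not because it lacks the query’s keywords, but because its meaning is furthest from the query — exactly the behavior the “best budget laptops” example above describes.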

Special databases optimized for speed and scale are a strict necessity when working with language models because you could be searching through billions of documents. With a semantic search implementation that includes text embeddings, storing and querying high-dimensional embedding data is much more efficient, enabling quick evaluations of queries against document vectors across large datasets.

That’s the context we need to start discussing and digging into RAG.

Retrieval Augmented Generation

Retrieval Augmented Generation (RAG) is based on research produced by the Meta team to advance the natural language processing capabilities of large language models. Meta’s research proposed combining retriever and generator components to make language models more intelligent and accurate for generating text in a human voice and tone, which is also commonly referred to as natural language processing (NLP).

At its core, RAG seamlessly integrates retrieval-based models that fetch external information and generative model skills in producing natural language. RAG models outperform standard language models on knowledge-intensive tasks like answering questions by augmenting them with retrieved information; this also enables more well-informed responses.

There are two core RAG components: a retriever and a generator. Let’s zoom in and look at how each one contributes to a RAG architecture.

Retriever

We already covered it briefly, but a retriever module is responsible for finding the most relevant information from a dataset in response to queries and makes that possible with the vectors produced by text embedding. In short, it receives the query and retrieves what it evaluates to be the most accurate information based on a store of semantic search vectors.

Retrievers are models in and of themselves. But unlike language models, retrievers are not in the business of “training” or machine learning. They are more of an enhancement or an add-on that provides additional context for understanding and features for fetching that information efficiently.

That means there are several retriever options out there. You may not be surprised that OpenAI offers one, given their ubiquity. There’s another one provided by Cohere, as well as a slew of smaller options you can find in the Hugging Face community.

Generator

After the retriever finds relevant information, it needs to be passed back to the application and presented to the user. That’s the generator’s job: converting the retrieved data into human-readable content.

What’s happening behind the scenes is that the generator accepts the embeddings it receives from the retriever, combines them with the original query, and passes the result through the trained language model for an NLP pass on the way to becoming generated text.

The entire tail end of that process involving the language model and NLP is a process in its own right and is something I have explained in greater detail in another Smashing Magazine article if you are curious about what happens between the generator and final text output.

RAG Full View

Pulling everything together, a complete RAG flow goes like this:

  1. A query is made.
  2. The query is passed to the RAG model.
  3. The RAG model encodes the query into text embeddings that are compared to a dataset of information.
  4. The RAG’s retriever decides the most relevant information with its semantic search abilities and converts it into vector embeddings.
  5. The RAG’s retriever sends the parsed embeddings to the generator.
  6. The generator accepts the embeddings and combines them with the original query.
  7. The generator passes its work off to the language model to produce natural-sounding content presented to the user.
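
If it helps to see those steps as code, here’s a rough, framework-agnostic pseudocode sketch of the same flow. Every name in it is a hypothetical stand-in rather than a real library API:

# Hypothetical pseudocode for the RAG flow above; none of these
# objects or methods belong to a real library.
def rag_answer(query, retriever, generator, language_model):
    # Steps 2-3: encode the query into text embeddings.
    query_embedding = retriever.embed(query)

    # Step 4: semantic search selects the most relevant information.
    relevant_docs = retriever.search(query_embedding, top_k=3)

    # Steps 5-6: the generator combines the retrieved context with the query.
    prompt = generator.build_prompt(query, relevant_docs)

    # Step 7: the language model produces the natural-sounding response.
    return language_model.complete(prompt)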

LLM Hallucinations And Knowledge Limitations

We opened this article by describing “hallucinations,” the incorrect responses LLMs give, often something along the lines of “I don’t know, but here’s what I do know.” The LLM will “make stuff up” because it simply doesn’t have updated information to respond with.

Let’s revisit the first query we used to kick off this article — “What is LangChain?” — and compare responses from the Vicuna and GPT-4 language models.

Here’s the transcription of the second response, the one from OpenAI’s GPT-4, for posterity:

“LangChain is a blockchain-based decentralized translation platform. It’s designed to connect translators and those seeking translation services directly without the need for intermediaries. This system uses artificial intelligence and blockchain technologies to provide high-quality, efficient translation services. LangChain also has a token-based economy, where users can earn and spend tokens within the LangChain ecosystem.”

Both Vicuna and GPT-4 refer to LangChain as a blockchain platform. Blockchain is a technology that stores data in a decentralized manner using chained blocks, so the models’ responses sound plausible given the “chain” in the name. However, LangChain is not actually a blockchain-based technology.

This is a prime example demonstrating how LLMs will fabricate responses that may seem believable at first glance but are incorrect. LLMs are designed to predict the next “plausible” tokens in a sequence, whether those are words, subwords, or characters. They don’t inherently understand the full meaning of the text. Even the most advanced models struggle to avoid made-up responses, especially for niche topics they lack knowledge about.

Let’s look at another example by querying: “What is the most preferred framework used by developers for building applications leveraging large language models?”

While Vicuna offers a couple of reasonable starting points for answering the question, the frameworks it refers to have limitations for efficiency and scalability in production-level applications that use LLMs. That could quite possibly send a developer down a bad path. And as bad as that is, look at the GPT-4 response that changes topics completely by focusing on LLVM, which has nothing to do with LLMs.

What if we refine the question, but this time querying different language models? This time, we’re asking: “What is the go-to framework developed for developers to seamlessly integrate large language models into their applications, focusing on ease of use and enhanced capabilities?”

Honestly, I was expecting the responses to refer to some current framework, like LangChain. However, the GPT-4 Turbo model suggests the “Hugging Face” transformer library, which I believe is a great place to experiment with AI development but is not a framework. If anything, it’s a place where you could conceivably find tiny frameworks to play with.

Meanwhile, the GPT-3.5 Turbo model produces a much more confusing response, talking about OpenAI Codex as a framework, then as a language model. Which one is it?

We could continue producing examples of LLM hallucinations and inaccurate responses and have fun with the results all day. We could also spend a lot of time identifying and diagnosing what causes hallucinations. But we’re here to talk about RAG and how to use it to prevent hallucinations from happening in the first place. The Master of Code Global blog has an excellent primer on the causes and types of LLM hallucinations with lots of useful context if you are interested in diving deeper into the diagnoses.

Integrating RAG With Language Models

OK, so we know that LLMs sometimes “hallucinate” answers. We know that hallucinations are often the result of outdated information. We also know that there is this thing called Retrieval Augmented Generation that supplements LLMs with updated information.

But how do we connect RAG and LLMs together?

Now that you have a good understanding of RAG and its benefits, we can dive into how to implement it yourself. This section will provide hands-on examples to show you how to code RAG systems and feed new data into your LLM.

But before jumping right into the code, you’ll need to get a few key things set up:

  • Hugging Face
    We’ll use this library in two ways. First, to choose an embedding model from the model hub that we can use to encode our texts, and second, to get an access token so we can download the Llama-2 model. Sign up for a free Hugging Face account in preparation for the work we’ll cover in this article.
  • Llama-2
    Meta’s powerful LLM will be our generator model. Request access via Meta’s website so we can integrate Llama-2 into our RAG implementation.
  • LlamaIndex
    We’ll use this framework to load our data and feed it into Llama-2.
  • Chroma
    We’ll use this embedding database for fast vector similarity search and retrieval. This is actually where we can store our index.

With the key tools in place, we can walk through examples for each phase: ingesting data, encoding text, indexing vectors, and so on.

Install The Libraries

We need to install the RAG libraries we identified, which we can do by running the following commands in a new project folder:

# Install essential libraries for our project
!pip install llama-index transformers accelerate bitsandbytes --quiet
!pip install chromadb sentence-transformers pydantic==1.10.11 --quiet

Next, we need to import specific modules from those libraries. There are quite a few that we want, like ChromaVectorStore and HuggingFaceEmbedding for vector indexing and embeddings capabilities, StorageContext and chromadb to provide database and storage functionalities, and even more for computations, displaying outputs, loading language models, and so on. This can go in a file named app.py at the root level of your project.

## app.py

## Import necessary libraries
from llama_index import VectorStoreIndex, SummaryIndex, download_loader, ServiceContext
from llama_index.vector_stores import ChromaVectorStore
from llama_index.storage.storage_context import StorageContext
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index.response.notebook_utils import display_response
import torch
from transformers import BitsAndBytesConfig
from llama_index.prompts import PromptTemplate
from llama_index.llms import HuggingFaceLLM
from IPython.display import Markdown, display
import chromadb
from pathlib import Path
import logging
import sys

Provide Additional Context To The Model

The data we will leverage for our language model is a research paper titled “Enhancing LLM Intelligence with ARM-RAG: Auxiliary Rationale Memory for Retrieval Augmented Generation” (PDF) that covers an advanced retrieval augmentation generation approach to improve problem-solving performance.

We will use the download_loader() module we imported earlier from llama_index to download the PDF file:

PDFReader = download_loader("PDFReader")
loader = PDFReader()
documents = loader.load_data(file=Path('/content/ARM-RAG.pdf'))

Even though this demonstration uses a PDF file as a data source for the model, that is just one way to supply the model with data. For example, there is Arxiv Papers Loader as well as other loaders available in the LlamaIndex Hub. But for this tutorial, we’ll stick with loading from a PDF. That said, I encourage you to try other ingestion methods for practice!

Now, we need to download Llama-2, our open-source text generation model from Meta. If you haven’t already, please set up an account with Meta and have your access token available with read permissions, as this will allow us to download Llama-2 from Hugging Face.

# huggingface api token for downloading llama2
hf_token = "YOUR Access Token"

To fit Llama-2 into constrained memory, like in Google Colab, we’ll configure 4-bit quantization to load the model at a lower precision.

quantization_config = BitsAndBytesConfig(
  load_in_4bit=True,
  bnb_4bit_compute_dtype=torch.float16,
  bnb_4bit_quant_type="nf4",
  bnb_4bit_use_double_quant=True,
)

Google Colab is where I typically do most of my language model experiments. We’re shrinking the language model down with that last snippet so it’s not too large for Colab to support.

Next, we need to initialize HuggingFaceLLM to set up our model for generating text on the fly:

llm = HuggingFaceLLM(
    model_name="meta-llama/Llama-2-7b-chat-hf", 
    tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
    query_wrapper_prompt=PromptTemplate("<s> [INST] {query_str} [/INST] "),
    context_window=3900,
    model_kwargs={"token": hf_token, "quantization_config": quantization_config},
    tokenizer_kwargs={"token": hf_token},
    device_map="auto",
)

Note that what’s happening here is that we’ve defined our model (Llama-2), but we’re using a specific variant of it that is designed for a chat UI (Llama-2-Chat). Even though this is what we are using for this article, you can also pick from other text generation models in the Hugging Face hub, such as Neural-chat from Intel.
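
For instance, assuming you’ve requested access where required, pointing the same setup at a different hub model is mostly a matter of swapping the identifiers and the prompt format. Here’s a hedged sketch for Intel’s Neural-chat; double-check the prompt template against the model card before relying on it:

# Illustrative: the same initialization pointed at a different hub model.
llm = HuggingFaceLLM(
    model_name="Intel/neural-chat-7b-v3-1",
    tokenizer_name="Intel/neural-chat-7b-v3-1",
    # Prompt format per the model card; verify before use.
    query_wrapper_prompt=PromptTemplate("### User:\n{query_str}\n### Assistant:\n"),
    context_window=3900,
    model_kwargs={"token": hf_token, "quantization_config": quantization_config},
    tokenizer_kwargs={"token": hf_token},
    device_map="auto",
)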

Compare Outputs

OK, so we have the model loaded and configured. Let’s try it out by asking it questions that it could reasonably answer using the content we provided via the ARM-RAG PDF file to see how the model responds without any context. The following code displays the model’s text output without the PDF providing data, for comparison’s sake:

from IPython.display import HTML, display

# Assuming resp contains the response
resp = llm.complete("What is ARM-RAG?")

# Using HTML with inline CSS for styling (gray color, smaller font size)
html_text = f'<p style="color: #1f77b4; font-size: 14px;"><b>{resp}</b></p>'

Let’s see the response:

display(HTML(html_text))

ARM-RAG (ARM Register-Allocating Generator) is a register allocation algorithm used in compiler design. nobody knows who invented it, but it was first described in a 1995 paper by Mark H. T. Kim and Andrew Y. Chen. The ARM-RAG algorithm is a register-allocating generator, meaning it is a heuristic algorithm that takes a program as input and generates a register allocation plan for it. The goal of the algorithm is to assign registers to variables in a way that minimizes the number of spills (i.e., loads or stores to/from memory) while satisfying all the memory access constraints of the program. The ARM-RAG algorithm works by first partitioning the program into a set of basic blocks, and then iteratively allocating registers to variables within each basic block. The algorithm uses a cost function to evaluate the quality of each allocation, and it selects the allocation with the lowest cost. The cost function takes into account factors such as the distance between the variable and the current register, the distance between the variable and the next allocation, and the number of spills that would be required to allocate the variable. ARM-RAG

Obviously, without any context from the PDF file, the model generates a hallucinated response that describes some unrelated algorithm. To fix this, we need to provide relevant passages from the paper to ground Llama-2’s responses. We’ll encode the document into embeddings and index them for retrieval; then, when we query, we can feed LLama-2 relevant retrieved passages to steer it toward accurate, on-topic responses based on the contents of the PDF file.

First, we need to create a client to interact with our ChromaDB database and a new collection that will hold our vector index.

# create client and a new collection
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("firstcollection")

Then we need to set up the HuggingFaceEmbedding class with the specified model name for embedding the text into vectors:

# Load the embedding model
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")

This initializes HuggingFaceEmbedding, passing the name of the pre-trained model we want to use, BAAI/bge-base-en-v1.5. There are other options, of course.

Now, we can set up the vector store and use it to index the embedded document vectors:

# set up ChromaVectorStore and load in data
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
index = VectorStoreIndex.from_documents(
  documents, storage_context=storage_context, service_context=service_context
)

This creates a ChromaVectorStore connected to our collection, defines the storage and service contexts, and generates a VectorStoreIndex from the loaded documents using the embedding model. The index is what allows us to quickly find relevant passages for a given query to augment the quality of the model’s response.

We should also establish a way for the model to summarize the data rather than spitting everything out at once. A SummaryIndex offers efficient summarization and retrieval of information:

summary_index = SummaryIndex.from_documents(documents, service_context=service_context)

Earlier, the model hallucinated when we queried it without the added context from the PDF file. Now, let’s ask the same question, this time querying our indexed data:

# Define your query
query = "what is ARM-RAG?"

query_engine = index.as_query_engine(response_mode="compact")
response = query_engine.query(query)

# Using HTML with inline CSS for styling (blue color)
html_text = f'<p style="color: #1f77b4; font-size: 14px;"><b>{response}</b></p>'
display(HTML(html_text))

Here’s the output:

Final Response: Based on the context information provided, ARM-RAG is a system that utilizes Neural Information Retrieval to archive reasoning chains derived from solving grade-school math problems. It is an Auxiliary Rationale Memory for Retrieval Augmented Generation, which aims to enhance the problem-solving capabilities of Large Language Models (LLMs). The system surpasses the performance of a baseline system that relies solely on LLMs, demonstrating the potential of ARM-RAG to improve problem-solving capabilities.

Correct! This response is way better than the one we saw earlier — no hallucinations here.

Since we’re using the chat subset of the Llama-2 model, we could have a back-and-forth conversation with the model about the content of the PDF file with follow-up questions. That’s because the chat engine condenses each follow-up question with the conversation history before querying the indexed data.

chat_engine = index.as_chat_engine(chat_mode="condense_question", verbose=True)
response = chat_engine.chat("give me real world examples of apps/system i can build leveraging ARM-RAG?")
print(response)

This is the resulting output:

Querying with: What are some real-world examples of apps or systems that can be built leveraging the ARM-RAG framework, which was discussed in our previous conversation?
Based on the context information provided, the ARM-RAG framework can be applied to various real-world examples, including but not limited to:

1. Education: ARM-RAG can be used to develop educational apps that can help students learn and understand complex concepts by generating explanations and examples that can aid in their understanding.

2. Tutoring: ARM-RAG can be applied to tutoring systems that can provide personalized explanations and examples to students, helping them grasp difficult concepts more quickly and effectively.

3. Customer Service: ARM-RAG can be utilized in chatbots or virtual assistants to provide customers with detailed explanations and examples of products or services, enabling them to make informed decisions.

4. Research: ARM-RAG can be used in research environments to generate explanations and examples of complex scientific concepts, enabling researchers to communicate their findings more effectively to a broader audience.

5. Content Creation: ARM-RAG can be applied to content creation systems that can generate explanations and examples of complex topics, such as news articles, blog posts, or social media content, making them more engaging and easier

Try asking more questions! Now that the model has additional context to augment its existing dataset, we can have a more productive — and natural — interaction.

Additional RAG Tooling Options

The whole point of this article is to explain the concept of RAG and demonstrate how it can be used to enhance a language model with accurate and updated data.

Chroma and LlamaIndex were the main components of the demonstrated RAG approach, but there are other tools for integrating RAG with language models. I’ve prepared a table that outlines some popular options you might consider trying with your own experiments and projects.

| RAG | Type of System | Capabilities | Integrations |
| --- | --- | --- | --- |
| Weaviate | Vector Database | Vector & Generative search | LlamaIndex, LangChain, Hugging Face, Cohere, OpenAI, etc. |
| Pinecone | Vector Database | Vector search, NER-Powered search, Long-term memory | OpenAI, LangChain, Cohere, Databricks |
| txtai | Embeddings Database | Semantic graph & search, Conversational search | Hugging Face models |
| Qdrant | Vector Database | Similarity image search, Semantic search, Recommendations | LangChain, LlamaIndex, DocArray, Haystack, txtai, FiftyOne, Cohere, Jina Embeddings, OpenAI |
| Haystack | Framework | QA, Table QA, Document search, Evaluation | Elasticsearch, Pinecone, Qdrant, Weaviate, vLLM, Cohere |
| Ragchain | Framework | Reranking, OCR loaders | Hugging Face, OpenAI, Chroma, Pinecone |
| metal | Vector Database | Clustering, Semantic search, QA | LangChain, LlamaIndex |

Conclusion

In this article, we examined examples of language models producing “hallucinated” responses to queries as well as possible causes of those hallucinations. At the end of the day, a language model’s responses are only as good as the data it’s provided, and as we’ve seen, even the most widely used models contain outdated information. And rather than admit defeat, the language model spits out confident guesses that could be misconstrued as accurate information.

Retrieval Augmented Generation is one possible cure for hallucinations.

By embedding text vectors pulled from additional sources of data, a language model’s existing dataset is augmented not only with new information but also with the ability to query it more effectively with a semantic search that helps the model more broadly interpret the meaning of a query.

We did this by indexing a PDF file containing content the model could use when it receives a query on a particular subject, in this case, “Enhancing LLM Intelligence with ARM-RAG: Auxiliary Rationale Memory for Retrieval Augmented Generation.”

This, of course, was a rather simple and contrived example. I wanted to focus on the concept of RAG more than its capabilities and stuck with a single source of new context around a single, specific subject so that we could easily compare the model’s responses before and after implementing RAG.

That said, there are some good next steps you could take to level up your understanding:

  • Consider using high-quality data and embedding models for better RAG performance.
  • Evaluate the model you use by checking Vectara’s hallucination leaderboard and consider using their model instead. The quality of the model is essential, and referencing the leaderboard can help you avoid models known to hallucinate more often than others.
  • Try refining your retriever and generator to improve results.

My previous articles on LLM concepts and summarizing chat conversations are also available to help provide even more context about the components we worked with in this article and how they are used to produce high-quality responses.

Categories: Others

5 Web Design Disasters That Will Ruin Your 2024

January 26th, 2024

Welcome to 2024, where web design is as fast-paced as ever. You know how crucial a killer website is nowadays — your online handshake, 24/7 billboard, and usually the first hello to your customers. But with all these rapid changes, it’s tough to keep up, right?

Categories: Designing, Others

CSS Blurry Shimmer Effect

January 25th, 2024

Imagine box-shadow but for a blur effect, where the backdrop of an element is blurred around that element, gradually decreasing the blur’s strength. I came up with the idea while trying to improve the contrast of a popup over a dark area where a box-shadow for the popup won’t make much sense, design-wise. I then thought, well, what other ways might create a good contrast effect? And so suddenly, the idea of a gradual blur effect around the object came to me.

See the Pen Faded Outer Box Backdrop Blur [forked] by Yair Even Or.

It would be awesome if we had a box-blur property or perhaps some sort of blur keyword we could set on box-shadow the way we do for inset shadows. Unfortunately, CSS has no such property. But because CSS is awesome and flexible, we can still get the effect by combining a few CSS features and hack it through.

What I’m going to show you from here on out is the thought process I took to create the effect. Sometimes, I find it easier to know what’s coming up rather than meandering through a narrative of twists and turns. So, for those of you who are like me and want to jump straight into the process, this was my approach.

Start With The Markup

The effect is approached in a way that it is applied to the ::before pseudo-element of some element, say some popup/dialog/popover/tooltip. Those are the common “targets” for this sort of effect. I think using a pseudo-element is a good approach here because it means we could technically scope the styles to the pseudo-element and re-purpose the effect on other elements without any HTML changes.

<!-- This is literally it for this demo -->
<div></div>

You can give the element a class, whatever dimensions you like, insert content and other child elements within it, or use a completely different element. The HTML isn’t the main ingredient for the secret sauce we’re making.

Position The Pseudo-Element

We want the ::before pseudo-element to occupy the entire area of the <div> element we’re using for this specific demo. Not only do we want it to cover the entire area, but even overflow it because that establishes the visible area, which holds the blur effect, so it will extend outwards.

::before {
  content: '';

  /* Make sure the parent element is at least relatively positioned to contain the pseudo-element. */
  position: absolute;

  /* The blur size should be anything below 0 so it will extend to the outside. */
  inset: -100px;

  /* This layer is positioned between the parent element and page background. */
  /* Make sure this value is one below the z-index of the parent element. */
  z-index: -1;
}

The code comments spell out the key pieces. An empty string has to be set for the content property so the ::before will be rendered, then we take it out of the document flow by giving it absolute positioning. This allows us to inset the element’s position, which ultimately sets the blur effect’s reach, much as we would with the box-shadow property — only we’re using inset to control its size. We want a negative inset value, where the effect extends further the lower the value gets.

Until now, we’ve set the foundation for the effect. There’s nothing really to see just yet. Now, the fun begins!

Masking With Transparent Gradients

Gradients are technically images — generated by the browser — which can be used as CSS masks to hide parts of an element to create various shapes. You may have seen a few related Smashing Magazine articles where CSS masking has been showcased, such as this one by Temani Afif.

Transparency is the key thing when it comes to masking with gradients. Transparency allows us to gradually hide portions of an element in a way that creates the illusion of fading in or out.

That’s perfect in this case because we want the effect to be stronger, closer to the object, and fade in intensity as it gets further away.

We’ll use two gradients: one that goes horizontally and another that goes vertically. I chose this route because it mimics a rough rectangle shape that fades out towards the edges.

As I said, transparency is key. Both gradients start transparent, then transition to black until just before the end, where they go back to transparent to fade things out. Remember, these gradients are masks rather than background images, so they are declared on the mask property, which controls which pixels should be rendered and their opacity.

mask:
  linear-gradient(to top, transparent 0%, black 25% 75%, transparent 100%),
  linear-gradient(to left, transparent 0%, black 25% 75%, transparent 100%);

See the Pen Basic Gradient Mask [forked] by Yair Even Or.

  • The vertical gradient (to top) creates a fade from transparent at the bottom to black in the middle, then back to transparent at the top.
  • The horizontal gradient (to left) produces a fade from transparent on the right to black in the middle, then back to transparent on the left.

This dual-gradient approach positions the black regions so they merge, creating the rough baseline of a rectangular shape that will be refined in the next step. The mask property is best declared prefixed first and then un-prefixed to cover more browsers:

-webkit-mask:
  linear-gradient(to top, transparent 0%, black 25% 75%, transparent 100%),
  linear-gradient(to left, transparent 0%, black 25% 75%, transparent 100%);
mask:
  linear-gradient(to top, transparent 0%, black 25% 75%, transparent 100%),
  linear-gradient(to left, transparent 0%, black 25% 75%, transparent 100%);

Refining Using The mask-composite Property

The mask-composite property is part of the CSS Masking Module and enables pixel-wise control over the blending of masked content, allowing for intricate compositions.

The source-in value of this property is very useful for the effect we are after because it tells the browser to only retain the overlapping areas of the mask, so only pixels that contain both (mentioned above) gradients will get rendered. This locks in a rectangle shape, which can then be applied on any DOM element that has none-to-moderately curved corners (border-radius).
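
Applied to our pseudo-element, that looks something like the snippet below. One wrinkle worth knowing: in the standard, unprefixed syntax, the equivalent keyword is intersect; source-in is the value the -webkit-prefixed property takes, which is why the final code declares both:

::before {
  /* ...the gradient masks from the previous step... */

  /* Keep only the pixels where both gradient masks overlap. */
  -webkit-mask-composite: source-in; /* Required for Safari */
  mask-composite: intersect;
}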

Gradually Blurring The Backdrop

Now that we have a mask to work with, all we need to do is use it. The backdrop-filter CSS property can blur anything that is rendered “behind” an element using the blur() function:

::before {
  /* etc. */

  backdrop-filter: blur(10px);
}

The larger the value, the more intense the blur. I’m using 10px arbitrarily. In fact, we can variablize this stuff later to make the implementation even more flexible and easily configurable.
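
As a quick sketch of what that configuration could look like, here’s the same effect wired up to custom properties. The variable names are my own and not part of the final demo:

::before {
  --blur-size: 100px;     /* How far the effect extends outward */
  --blur-strength: 10px;  /* How intense the blur is */

  inset: calc(-1 * var(--blur-size));
  backdrop-filter: blur(var(--blur-strength));
}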

But wait! As it turns out, Safari requires a vendor-prefixed version of backdrop-filter to get it working there:

::before {
  /* etc. */

  -webkit-backdrop-filter: blur(10px); /* Required for Safari */
  backdrop-filter: blur(10px);
}

Note: It’s preferred to declare prefixed properties before the unprefixed variant so they serve as a fallback for browsers that don’t (yet) support them or their implementation is different.

A Touch of Synergistic Shadow

I think adding a slight semi-opaque black box-shadow that covers the blur area gives the effect a little extra depth. The only thing is that you’ll want to add it to the element itself rather than its ::before pseudo-element:

div {
  box-shadow: 0 0 40px #00000099;
}

That’s totally optional, though.

Bringing Everything Together

Here’s how the CSS comes out when we combine everything:

/* This is set on the ::before pseudo-element of the element it is applied to. */
::before {
  /* This layer is positioned between the element and its background. */
  content: '';
  position: absolute;

  /* This should not affect the contents of the container. */
  z-index: -1;

  /* A negative inset extends the layer outside the container so the blur reaches beyond it. */
  inset: -100px;

  /* The blur effect */
  -webkit-backdrop-filter: blur(10px); /* Required for Safari */
  backdrop-filter: blur(10px);

  /* A mask fades the blur effect, so it gets weaker towards the edges, */
  /* further from the container box. */
  /* (The fill color is irrelevant, so "red" is used as it's the shortest color name.) */
  mask:
    linear-gradient(to top, transparent 0%, red 100px calc(100% - 100px), transparent 100%),
    linear-gradient(to left, transparent 0%, red 100px calc(100% - 100px), transparent 100%);

  /* This merges the masks above so only the overlapping pixels are rendered, */
  /* creating the illusion of a fade-out mask. */
  -webkit-mask-composite: source-in; /* Required for Safari */
  mask-composite: intersect;
}

The Final Demo, One More Time

See the Pen Faded Outer Box Backdrop Blur [forked] by Yair Even Or.

I’ve also prepared a simplified version with minimal code and no CSS variables that’s easier to read and re-purpose.

Categories: Others Tags:

Best Affiliate Programs for Beginners

January 25th, 2024 No comments

Affiliate marketing introduces a significant challenge for beginners: identifying the best affiliate programs.

Making the wrong choice among the many high-paying options might set you back for months. Conversely, the right affiliate program could have you earning your first bucks before your site is even finished.

Given the frequency of this question, this blog post is dedicated to exploring the best affiliate programs for beginners. We’ll also provide a balanced view of why opting for affiliate networks sometimes makes more sense than an in-house affiliate program, and vice versa.

Let’s dive into it.

Jotform Affiliate Program

Jotform, established as an online form builder, has garnered over 20 million users with its numerous features such as a PDF editor, report builder, and e-signature capabilities. With its user-friendly interface and extensive integrations, Jotform also allows users to generate income through its affiliate partnership program.

You can earn commissions from your content by sharing affiliate links and receiving a 30% commission for every new subscriber you bring in. You can easily calculate your potential earnings with the Jotform Commission Calculator.

Jotform gives you everything you need to boost your promotions, such as brand assets, helpful content, and a training package. These resources help you increase your referrals and improve your overall engagement.

Commission Rate: 30% 

Payment Duration: 60 days

Amazon Associates Program

Amazon’s Affiliate Program, a dominant force in e-commerce, stands as a lucrative opportunity for affiliates by connecting them to a global customer base and a diverse product range. Its user-friendly interface and global recognition make it an attractive option for newcomers and seasoned affiliates, promising substantial earnings.

Affiliates can earn commissions for every sale through their unique links, offering a flexible and scalable income. With millions of products across various categories, you have the flexibility to tailor promotions to diverse audience interests.

Commission Rate: It varies depending on the category, ranging from 1% to 40%.

Payment Duration: 60 days

Semrush

Semrush is one of the most popular SEO, competitor research, and content marketing tools on the market. It offers marketers a comprehensive suite of tools to do everything from keyword and backlink research to optimizing existing pages of content. That’s why you’ll find so many SEOs promoting it in their blog posts and YouTube channels. There’s also the tiny detail that Semrush loves rewarding its affiliates with lots of cash. You get $10 for every referral that signs up for a free trial and $200 for any referral that chooses a paid account.

This combination of powerful features and profitable affiliate rewards makes Semrush an attractive choice for marketers looking to boost their SEO capabilities and earnings.

Commission Rate: $200 for every new sale and $10 for every free trial sign-up

Payment Duration: 120 days

ShareASale 

ShareASale, with a business history spanning more than two decades, has become a notable player in affiliate marketing, serving more than 4,000 marketers. Despite being acquired by Awin in 2017, ShareASale continues to operate independently, maintaining its brand identity. Although its interface may not be the most visually appealing or interactive, ShareASale distinguishes itself by offering an extensive array of affiliate programs, around 5,000 in all. These programs tend to concentrate on smaller but lucrative businesses, differing from the common practice of mainly showcasing large, well-known brands.

An appealing aspect for new affiliates considering ShareASale is that the network is welcoming to beginners and won’t remove you if you don’t generate any sales in your first 180 days.

Commission Rate: Varying from 4% to 50%

Payment Duration: 30 days

CJ Affiliate

Commission Junction is reportedly the largest affiliate network globally. Its base represents thousands of advertisers across various industries. This diversity ensures that beginners can discover several programs to promote. An advantage for new affiliate marketers is the opportunity to collaborate with a dedicated affiliate manager in most instances. These professionals can provide valuable insights and advice, elevating your affiliate marketing career.

A distinctive feature of the CJ Affiliate network is that it provides clear visibility into the profitability of the affiliate programs. This transparency eliminates guesswork and allows marketers to strategically choose which programs to promote and estimate potential earnings per sale. This feature proves to be a significant time saver in planning and optimizing affiliate marketing efforts.

Commission Rate: Ranges from 3% to 50% or more

Payment Duration: 30 days

Shopify

Shopify is the online e-commerce platform transforming the way businesses enter the world of online sales. This online store maker removes the need for technical expertise, eliminates the hassle of hiring web developers, and addresses the common challenges that hinder new online ventures. With Shopify, your visitors can choose from plans ranging from $29 to $299 per month, a mere fraction of the cost of custom e-commerce solutions. Its user-friendly approach ensures that businesses, especially those new to selling online, can set up shop without wrestling with technical complexities or facing steep expenses.

The appeal of Shopify’s affiliate program lies in its generous 200% commission rate for steering new customers its way, which works out to a 100% commission on each of the first two payments made by a new customer. As an affiliate, you also have the option of earning recurring commissions from referrals and ‘Plus’ upgrades, making Shopify a profitable opportunity for those seeking substantial returns in the e-commerce industry.

Commission Rate: 100% on the first two payments

Payment Duration: Approximately every two weeks

ClickFunnels 

ClickFunnels is an all-in-one SaaS platform, combining sales, marketing, e-commerce, and analytics tools. These work together to automate sales funnels and empower entrepreneurs to monetize their ventures. Influencers can create courses, manage payments, and engage with the audience, all within the platform. Creators and consultants can build dynamic sales funnels for their e-books, incorporating cross-selling strategies. Small businesses can experiment with lead generation through A/B split testing, while subscription box businesses can use attractive landing pages for sales and order management. ClickFunnels provides a versatile solution for a streamlined and effective entrepreneurial journey.

ClickFunnels presents a rewarding opportunity with a generous 40% recurring commission on monthly accounts. Affiliates can also earn a 5% commission for each monthly account sold by those they refer. The platform further enhances its appeal with the Dream Car program, offering enticing bonus prizes to top-performing affiliates. This means that as an affiliate, you not only earn commissions but also stand a chance to unlock additional exciting rewards.

Commission Rate: Up to 30%

Payment Duration: 30 days

Bluehost

Since 2003, Bluehost has positioned itself as a reliable option for affiliates and small businesses seeking affordable hosting, and it presents a profitable affiliate opportunity for beginners. It stands out as one of the premier affiliate programs in this selection thanks to its earning potential, paying a minimum of $65 per referral. This explains why the Bluehost affiliate program is frequently mentioned and endorsed across marketing blogs. Its profitability aligns with the broader trend in web hosting affiliate programs for beginners, and the program serves as an excellent fit for new bloggers.

Given that people need web hosting before establishing their website or blog, there’s a compelling rationale for directing aspiring bloggers and online entrepreneurs to Bluehost. In doing so, everyone involved stands to benefit, especially beginners.

Commission Rate: Ranges from $65 – $125 per sale

Payment Duration: 90 days

Etsy

The Etsy affiliate program is a unique entry in affiliate marketing, focusing on independent crafters, vintage traders, and artists. While it primarily caters to traders, it’s distinct due to the variety of products on the platform. Etsy has quickly become the leading platform in its niche, surpassing competitors and transforming into a publicly traded company in a short time.

What’s interesting is its widespread appeal among people who have either bought or sold handmade items on Etsy. Through Etsy’s affiliate program, you can earn a 4% commission by linking to or reviewing products. Note that Etsy’s affiliate program is handled by Awin, so you must have an account with them before you begin promoting this opportunity.

Commission Rate: 4% per sale

Payment Duration: 30 days

ClickBank

ClickBank, founded in 1998, stands as one of the oldest affiliate networks. Despite not holding the same fame it once did, ClickBank remains a viable source of income for affiliates today. For those seeking to monetize a niche effortlessly, it offers multiple avenues, including display ads and affiliate programs. These options are user-friendly, making them accessible to both seasoned marketers and newcomers.

While ClickBank primarily features digital products in its Marketplace, a selection of physical products is also available. One notable point is the Marketplace interface: it is still functional but appears largely unchanged since the company’s founding in 1998.

Despite the interface aesthetics, exploring ClickBank’s Marketplace remains a profitable opportunity for affiliate marketers. Digital products, particularly in healthcare and beauty niches, can yield substantial commissions. Even those with limited experience in affiliate marketing can secure $100+ commissions for a single referral, making ClickBank a great option for beginners.

Commission Rate: Ranges from 10% to 20%

Payment Duration: Bi-weekly or monthly

Elementor

Continuing our exploration of top affiliate marketing programs for beginners, Elementor is a popular WordPress page builder currently leading the web market. Elementor has democratized website creation, placing the ability to craft visually appealing websites in the hands of thousands who may lack coding skills. Consequently, it powers a whopping 8 million websites across diverse industries, a testament to its widespread appeal and effectiveness.

While there are numerous WordPress page builders available, Elementor stands out for its exceptional marketing efforts, ensuring that you can confidently recommend a product that lives up to its promises. Adding to that, with strong support from the marketing team, constant feature updates, and a highly engaged Facebook community, it’s a product that practically sells itself.

Elementor’s affiliate program not only aligns with your conscience, knowing you’re endorsing a reliable product, but it also offers a substantial financial incentive for each successful referral.

Commission Rate: 50% per sale (plans start at $49 per year)

Payment Duration: 30 days

Conclusion

If you are a beginner in affiliate marketing and need help deciding which partnership program is the safest and most reliable, this exploration of the top programs for beginners should point you in the right direction.

We’ve covered substantial resources here, offering you a variety of affiliate program choices to consider.

If you already have some traffic, you can start earning passive income from your visitors practically tomorrow, with payments arriving within 30 to 120 days, depending on the program.

Here’s to a successful beginning!

The post Best Affiliate Programs for Beginners appeared first on noupe.

Categories: Others Tags:

14 Best SEO Tools in 2024

January 24th, 2024 No comments

There is an ongoing debate about whether design is more art or more science. SEO is 100% alchemy; if you know where to look and how to interpret what you find, you can turn underperforming websites into gold.

Categories: Designing, Others Tags:

The AI Dilemma In Graphic Design: Steering Towards Excellence In Typography And Beyond

January 23rd, 2024 No comments

Imagine it’s 2028, and you’re at your projected, souped-up workstation. “Hey AI,” you say, “I need some type options for this page heading…” Before you finish, your AI assistant, affectionately nicknamed TypeMaster3000, eagerly interrupts: “Something bold yet whimsical? Or perhaps a serif that subtly says, ‘I’m sophisticated but know how to party’?”

You roll your eyes, “Just show me ten options. And no disco serifs this time.”

Gone are the days of clunky, AI-generated fonts that struggled to produce consistent, quality designs. Licensing issues? A thing of the past. The AI of 2028 presents you with multilingual, inventive font families, each glyph crafted to perfection. But perfection isn’t without its quirks.

As TypeMaster3000 rolls out its instantly generated font options, each design seems to have a touch of your recent seaside holiday. There’s Sandy Serif and Desert Island Display.

You sigh. “Less beach, more business, please.”

“Understood,” TypeMaster3000 chirps. “Reverting to corporate mode!”

You spot a typeface you like, and with a tap, the font slots into your design, aligning proportionally and positionally as if it was always meant to be there.

The Evolution of Technology In Typography

Back in the present, the creation of new, professional typefaces remains a meticulous and time-consuming endeavor, even with modern software. Throughout its history, the type industry has consistently been at the forefront of technological evolution, from wood to metal, film, and digital.

Each innovation has transformed type production and broadened access for designers, both in making and using type. Like all industries, we are poised at the base camp of the AI mountain, bracing ourselves for a steep and transformative climb.

Predictions of the medium-term impact of artificial intelligence on type design generally converge around two main scenarios:

  1. AI as a collaborative tool (AI as co-Pilot)
    In this scenario, AI assists in the type design process, taking on time-consuming tasks like creating bold versions or italics of a typeface. This would benefit type designers by streamlining their workflow and allowing more room for creative exploration without the burden of repetitive tasks.
  2. Fully AI-Generated Fonts (AI as autopilot)
    As with our TypeMaster3000 scenario, AI would independently create fonts in this scenario, likely resulting in a surge of free, enthusiast-prompted typefaces. Initially, these may lack the innovation, consistency, and craftsmanship of professional designs, so the market will likely lean towards more dependable, expertly generated AI fonts.

Over time, however, it is expected that we will gravitate towards autopilot fonts, as even naively prompted AI results (e.g., “Make me a seaside holiday font”) will begin to match, then surpass, human-made efforts. Both scenarios seem like good news for type users, offering a wider variety of fonts. But this change will completely disrupt the type industry.

A Gutenberg-Scale Transformation

Yet, this vision is far from the summit of our AI mountain. While disruptive, it marks a crucial and necessary transition for the type industry towards a groundbreaking future. While the journey may be challenging, AI is poised not just to generate innovative fonts but to fundamentally revolutionise our text communication, paving the way for a new era of dynamic and interactive typography.

Despite previous technological advances, typography actually hasn’t changed much since its invention almost 600 years ago, and much scribal creativity was sacrificed to make text more accessible. The next evolutionary step will be dynamic, context-sensitive typefaces. These would provide more nuanced and precise forms of communication, tailoring text to specific contexts and user needs.

This typographic revolution will significantly benefit our global civilization and should be our ultimate aim.

Current Advances In The AI Revolution

AI image generation, especially in deep learning, is advancing fast. Focussed mainly on pixel-based imagery, it achieves impressive results. These are created using neural networks to manipulate individual pixels, like creating a digital mosaic. Yet vector graphics, integral to font creation, are moving at a slower pace, with only a handful of papers surfacing in 2023.

Vector graphics, defined by Bézier curves, present a more complex challenge for neural encoding due to their algorithmic nature. Yet, there’s growing momentum in adapting language model techniques to this field, showing promise for more sophisticated applications.

One area of notable progress is style-transfer research, where AI learns and applies the style of one image to another. This would be like fusing a Roundhand script style into a modern Sans Serif, creating something like Helvetica with swashes and stroke contrast.

Significant strides are also being made in so-called few-shot font generation tasks, which involve AI learning a font’s style from a few initial characters and then extrapolating it to generate a complete font.

This approach has enormous commercial and creative potential, especially for designing multilingual fonts and those with huge character sets like Japanese and Chinese fonts.

While AI’s potential for producing vector graphics and typography is still in the early stages, the current direction shows a promising future, gradually overcoming the complexities and opening new avenues for designers.

Guiding the Future: The Crucial Role Of Designers In AI Typography

Given this trajectory and the lofty claims of what AI may do in the future, creative professionals are rightly contemplating its short-term implications. Designers are increasingly concerned that their specialised skills, including typography, might be overlooked in a landscape filled with AI-aided enthusiasts.

To preserve our creative integrity and professional effectiveness, it’s crucial for designers to influence the development of AI tools and insist on high design standards to positively shape the future of our industry.

Despite initial fears and controversies, Gutenberg’s press became one of history’s most transformative inventions. AI, too, holds a similar potential, but its direction depends on our approach.

The Designer’s Dilemma: Embracing AI While Maintaining Quality

We face a choice: harness artificial intelligence to boost our creativity and efficiency or risk allowing naive automation to erode the quality of our work. Rather than being passive spectators, we must actively steer AI advancements toward quality-driven outcomes, ensuring these tools enhance rather than diminish our design capabilities.

It has been noted that designers can harness AI tools more effectively because they possess a deep understanding of how to construct an idea. But embracing these new tools doesn’t mean relaxing our guard and allowing standards to be set for us. Instead, we should use AI as a springboard for inspiration and innovation.

For example, current AI-generated imagery often yields unexpected results due to a combination of unwieldy text prompts and massive data sets. But it can be an effective tool for inspiration and to spark new ideas.

Holding The Line In AI Typography

In typography, designers will need to be more vigilant when selecting typefaces. A wave of potentially original and inventive amateur fonts may flood the market, requiring more than just a surface-level assessment of quality. Designers will need to check character sets, spacing, and overall design more carefully.

Using typefaces skillfully is more important than ever, as it will not only make work stand out but also influence industry trends and standards to inspire and guide type designers.

Adapting To AI In Type Design

The development and direction of AI tools don’t need to be solely in the hands of large corporations investing billions into the technology. A positive step forward would be for type foundries to collaborate, pooling their resources to create a collective AI software model. This cooperative approach would enable them to not only capitalise on AI-driven innovations but also safeguard their unique designs from unauthorised use by others.

Furthermore, research indicates that smaller AI models can sometimes outperform their larger counterparts, opening doors for independent foundries to develop custom, small-scale AI tools tailored to their specific needs.

Designers Shaping the Future: From Static Typography To AI-Driven Innovation

While a wave of mixed-quality amateur fonts is a concern, AI is poised to significantly enhance the quality and innovation of professionally crafted typefaces. In partnership with developers, type designers will lead the next evolution of type.

What we’ve become used to in terms of typography is woefully static, lacking the ability to dynamically adjust to content, context, or reader interaction. At present, our options are limited to changing font styles and incorporating emojis.

Historically, scribes were adept at creating text with emphasis and embellishments, enriching the transfer of information. When Johannes Gutenberg invented his printing press, his goal wasn’t to surpass scribes’ artistry but to bring knowledge and information to the masses. Gutenberg succeeded as far as that is concerned, but his press left behind the scribes’ nuanced ability to visually enhance text, even if type has evolved creatively along the way.

Typography’s Destiny

The next revolution in typography ought to usher in an era of fluidity, adaptability, and interactivity in textual presentation. Type should act more like custom lettering. This shift would significantly enhance the reader’s experience, making written communication more versatile, precise, and responsive to various factors such as:

  • Content sensitivity
    Text might change based on the content it’s displaying. For example, changing style and rhythm for the climax of a book or floating playfully when reading an uplifting poem.
  • Environmental adaptability
    Text changes in response to different lighting or the reader’s distance from the text.
  • Emotional expression
    Incorporating elements that change based on the emotional tone of the text, like color shifts or subtle animations for expressive communication.
  • User interaction
    Text could vary depending on the user’s reading speed, eye movement, or even emotional responses detected through biometric sensors.
  • Device and platform responsiveness
    We could have text adapted for optimal readability, considering factors like screen size, resolution, and orientation, without having to “guess” in CSS (a rough sketch of today’s nearest workaround follows this list).
  • Accessibility Enhancements
    Imagine situations where text dynamically adjusts in size and contrast to accommodate young readers, dyslexic readers, or those with visual impairments.
  • Language and cultural adaptation
    For example, a typeface could effortlessly transition between languages and scripts while maintaining the design’s typographic intention and adapting sensitively to cultural nuances.
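
As a point of comparison, the closest CSS gets today is fluid type that interpolates between fixed bounds, which is still static compared to the adaptive behavior imagined above. Here’s a minimal sketch (the selector and values are arbitrary):

h1 {
  /* Scales smoothly between 1.5rem and 3rem with the viewport width,
     a static approximation of truly context-aware type. */
  font-size: clamp(1.5rem, 2.5vw + 1rem, 3rem);
}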

Conclusion: Embracing The Future Of Design

We stand at the threshold of a monumental shift in typography. Like every industry, we’re entering a period of significant transformation. Future scenarios like TypeMaster3000 show how turbulent the journey will be for the industry. But it is a journey worth making to push beyond the barriers of static type, advance our creative capabilities, and foster better communication across cultures.

Change is coming, and as designers, it’s not enough to merely accept that change; we must actively steer it, applying our expertise, taste, and judgment. It’s crucial that we collectively guide the integration of AI in typography to do more than automate — we must aim to elevate. Our goal is dynamic, precise, and contextually responsive typography that transcends the static utility of fonts.

By guiding AI with our collective creativity and insights, we can not only augment our creativity but also raise design standards and enrich our entire civilization.

Categories: Others Tags: