Perplexity’s $3 Billion Valuation: What it Means for AI Search’s Future
Understand the significance of Perplexity's latest valuation and its impact on the AI industry.
Perplexity is not just a name challenging Google; it's redefining information retrieval with its powerful AI search engine.
The company now has over 20 million monthly active users and an ARR (Annual Recurring Revenue) of over $20 million. By the end of the year, it aims for around 100 million monthly active users and $100 million ARR. By contrast, many other AI companies are still at the pre-PMF (Product-Market Fit) stage.
Perplexity was founded in August 2022. Its CEO, an Indian-origin Berkeley CS PhD, interned at OpenAI and DeepMind, then spent a year as a researcher at OpenAI before founding the company.
The CTO was an engineer at Microsoft Bing and later worked at Quora and Meta AI before joining Perplexity. The team consists of about 50 people.
In its early days, prominent AI figures like Google SVP Jeff Dean, Meta's Yann LeCun, OpenAI's Pieter Abbeel and Andrej Karpathy, Solo GP Elad Gil, GitHub's former CEO Nat Friedman, Databricks' Co-founder Reynold Xin, and Hugging Face's CEO invested in Perplexity. Amazon's founder Jeff Bezos, an early Google investor, also invested personally.
Perplexity completed its Series A funding in early 2023 and recently finished its fourth round, led by Bessemer, reaching a $3 billion valuation.
Teams like Perplexity, with deep connections to OpenAI, might eventually be acquired by OpenAI.
Search is still the largest and most profitable market on the internet.
Microsoft Bing holds only 3.4% of the global search market but generates $12 billion in annual revenue. If AI search can capture just 1%-2% of Google's share, it would be a substantial business.
AI search should be the biggest killer app for large models in their early stages. Perplexity users perform over three times more queries per day than Google users.
Perplexity has high long-term retention, significantly higher than other AI products.
Perplexity's growth metrics have been consistently strong: rapid product iteration and a steady monthly growth rate of over 20%.
Reviewing the past year in AI, information retrieval is the use case that best matches current model capabilities; education is another significant fit.
Understanding what today's models can and cannot do makes it clear why these products fit so well.
When evaluating PMF (Product-Market Fit) today, two key factors are retention and commercialization.
Perplexity excels in retention, but its commercialization isn't as efficient as ad platforms.
This issue is common for AI products today. The ad model was well established in the PC era and adapted easily to mobile, but AI products have yet to find an equally efficient equivalent.
Perplexity's retention is strong, largely due to product perception.
AI products today lack strong network effects, economies of scale, and data flywheels. Thus, the focus is on user perception.
Perplexity owns the perception for AI search, ChatGPT for chat, and Character AI for companionship.
It's similar to how Google captured the perception of search early on.
AI search has potential for the future. With agents becoming more practical, search remains the best application.
It helps users with personalized knowledge, complex questions, and actionable insights, aiding in decision-making.
Why did Perplexity succeed?
A key insight from Perplexity's rise is understanding which use cases large models can best support.
Large model capabilities are unlocked gradually.
As a knowledge worker, I engage in three types of creative tasks: combinatorial, exploratory, and transformative creation.
Information retrieval is a combinatorial task, which aligns well with current model capabilities.
Long-distance reasoning tasks, like writing a Tesla investment report, are exploratory. They need planning and various types of reasoning.
Transformative creation includes scientific discoveries such as the laws of gravity and friction. Current models struggle with such tasks and with problems they have never seen.
Perplexity's team has a deep understanding of current model limitations. Models today excel at combining information, not free exploration.
Today's model capabilities perfectly match Perplexity's use case for complex information queries.
Perplexity Launches "Pages"
This revolutionary tool turns research topics into clear, attractive articles. With a single click, users can publish their articles to Perplexity's content library and share them with a global audience instantly.
The style is fresh, clean, and well-organized. When I first saw it, it reminded me of Notion rather than traditional Wikipedia pages.
Perplexity Pages can display an AI-generated banner image, which users can replace if they wish.
To launch Perplexity Pages, go to the Perplexity console, hover over the "Library" tab, and you will see the new "Pages" option.
Please note that this feature is currently available only to paid users. If you don't see the "Pages" option under the "Library" tab, you may need to check your subscription status.
Choosing a Topic and Audience: Enter the topic you want to explore and select the appropriate audience. Options include:
Anyone: Easy to understand, suitable for all audiences.
Beginners: Designed for newcomers with no prior knowledge.
Experts: Uses professional terminology and covers complex concepts.
Should we think about models first or products first?
Perplexity fine-tuned a Mistral-7B model, which covers many queries effectively.
This decision was driven by cost considerations and the need for Perplexity to modify models independently, as they can't alter GPT or Claude models. Thus, they fine-tuned a smaller model for end-to-end optimization.
If I were to create an application today, I would first focus on achieving PMF (Product-Market Fit) using the best available model. Then, I would look for ways to reduce costs and eventually optimize end-to-end, possibly distilling a smaller model to handle specific, popular queries. This approach seems practical.
For a specialized assistant application, I would use GPT-4 to achieve PMF, then reduce costs and develop my own model to make necessary modifications.
Distilling a model, for example on the order of 1T tokens of data, is not very expensive and is far cheaper than training a model from scratch. Owning a custom model could be beneficial for many application companies in the future.
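The cost-optimization path described above can be sketched as a simple model cascade: queries the distilled small model covers well go to it, and everything else falls back to the frontier model. This is a hypothetical illustration only; `small_model`, `frontier_model`, and the prefix-based coverage check are stand-ins for real inference calls and a real query classifier, none of which come from Perplexity's actual system.

```python
from dataclasses import dataclass, field

def small_model(query: str) -> str:
    # Placeholder for a distilled ~7B model fine-tuned on popular query types.
    return f"[small] answer to: {query}"

def frontier_model(query: str) -> str:
    # Placeholder for the strongest available model (e.g. a GPT-4-class API).
    return f"[frontier] answer to: {query}"

@dataclass
class CascadeRouter:
    # Query prefixes the small model was distilled to handle well;
    # the categories here are purely illustrative.
    covered_prefixes: tuple = ("what is", "who is", "define")
    stats: dict = field(default_factory=lambda: {"small": 0, "frontier": 0})

    def answer(self, query: str) -> str:
        q = query.lower().strip()
        if q.startswith(self.covered_prefixes):
            self.stats["small"] += 1   # cheap path for popular queries
            return small_model(query)
        self.stats["frontier"] += 1    # expensive fallback for the long tail
        return frontier_model(query)

router = CascadeRouter()
print(router.answer("What is retrieval-augmented generation?"))
print(router.answer("Write a Tesla investment report"))
print(router.stats)
```

The design point is that the routing logic, not either model, captures the economics: the more traffic the small model absorbs, the closer serving cost gets to the distilled model's cost while quality on hard queries stays at the frontier level.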
Perplexity must enhance its model capabilities and consider deep collaborations with a model company or a major player.
How does Perplexity compete with Google and OpenAI?
I think the answer might be differentiated competition because Perplexity is not exactly the same product or technology.
Differentiated competition is important. Startups should understand it well since directly competing with giants is tough.
Perplexity focuses on an area where LLMs excel: knowledge workers, particularly complex information Q&A in domains such as business and local life.
Google excels at web navigation and transactional information search.
Perplexity addresses about 5% of complex queries in traditional search, which traditional search struggles with. These complex questions are like the "crown jewels" of search.
Competing with OpenAI depends on OpenAI itself.
OpenAI still has a research-driven culture. Balancing this with a product-focused culture is crucial.
Companies like ByteDance or Meta have strong product and commercial cultures.
For example, if a search product team wants to change model data, they need the model team's agreement. At OpenAI, the model team might resist altering foundational data.
OpenAI might have 10 people working on search, while Perplexity has 50 dedicated to this area, potentially making Perplexity equally competitive.
Future product forms will also evolve.
People naturally expect instant answers from search and chat. However, many work tasks are not immediate. Collaboration with colleagues often involves multi-step planning and may take a week for results.
Breaking complex questions into multi-step, iterative searches for better answers is a potential future state.
In the future, many user tasks will be managed on a dashboard, with users continuously prompting the AI.
This workflow will run on an AI product, which is an interesting future, especially as agents become more integrated.
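That multi-step workflow can be sketched as a small loop: plan sub-queries, search each one, and synthesize the findings. Everything below is a hypothetical skeleton; `decompose`, `search`, and the final join are stubs standing in for real LLM planning, retrieval, and synthesis calls.

```python
def decompose(question: str) -> list[str]:
    # Stand-in for an LLM planning step that breaks a complex question
    # into ordered sub-queries.
    return [f"{question} - step {i}" for i in range(1, 4)]

def search(sub_query: str) -> str:
    # Stand-in for a retrieval call returning a snippet of evidence.
    return f"evidence for '{sub_query}'"

def iterative_answer(question: str) -> str:
    findings = []
    for sub_query in decompose(question):
        # In a real agent, each finding could refine the next sub-query.
        findings.append(search(sub_query))
    # Stand-in for a synthesis step that combines all gathered evidence.
    return " | ".join(findings)

print(iterative_answer("Should I invest in Tesla?"))
```

In the dashboard vision described above, each iteration of this loop would surface its intermediate findings to the user, who keeps steering the process rather than waiting for a single instant answer.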
Will Perplexity be acquired in the future?
There's an 80% chance it will be acquired. Startups today seem to be under the influence of tech giants and struggle to break free.
Recently, I bought Meta glasses. Integrating a voice-activated Perplexity into Meta glasses would be great, especially since Siri's search hasn't performed well.
Microsoft's Bing hasn't gained a strong reputation despite its AI efforts. In contrast, Perplexity has a good reputation, making it a key player under the influence of giants and a crucial battleground.
AI Startup Commercial Rankings
OpenAI is currently the leader, with ChatGPT having over 100 million daily active users, which is impressive.
The Information reported that OpenAI's ARR (Annual Recurring Revenue) is already $3.4 billion.
We reviewed other AI companies in Silicon Valley and found their combined ARR is less than $1.5 billion, which is less than half of OpenAI's.
Anthropic focuses entirely on enterprise clients and has a smaller consumer base. Its revenue is about one-tenth of OpenAI's, around $300 million.
Character AI has 6-7 million daily active users and is valued between $3 billion and $5 billion.
Recently popular AI-programmer startups are valued at around $2 billion, and several new agent-focused companies, like Reflection, are valued at several hundred million dollars.
There are around 10 unicorn companies, but most are still pre-PMF (Product-Market Fit). Very few have a significant ARR, with over $10 million being rare.
Large model companies have notably high valuations.
OpenAI is valued at around $100 billion, while Anthropic and Elon Musk's xAI are valued at $20 billion. Mistral is valued at $5-6 billion, Character AI at $3-5 billion, and Cohere at $4-5 billion.
Among AI-native applications, several unicorns stand out, but many companies have valuations in the low billions. Perplexity is one of the highest-valued pure application companies, not involved in model building.
AWS is deeply tied with Anthropic, Microsoft with OpenAI, and Elon Musk has independently raised $6 billion for xAI and may raise more.
Character AI may find it difficult to raise a few more billion dollars, while Mistral recently raised $500-600 million.
The only three independent players at this scale are OpenAI, Anthropic, and xAI. Each has clusters of 32,000 GPUs this year and is aiming for 100,000 next year.
Without the financial backing to raise billions of dollars, competing in large models is challenging.
In the next 6-12 months, it will be interesting to see what Apple and Meta choose to do. Apple recently announced a record $110 billion share buyback.
If it were up to me, Apple should acquire a model company; Elon Musk has already secured the last "ticket." Meta has plenty of GPUs and strong cluster capabilities, but its Llama team lacks the talent density of the leading model companies.
Elon Musk's xAI has backing from Sequoia Capital, a16z, and other funds, each investing around $500 million, which is a significant amount for a VC fund.
Has there been an AI-native application boom?
No, not yet. It's been a year since GPT-4 was released, and there hasn't been a major breakthrough in AI-native applications.
90% of the reason is that GPT-4's capabilities are limited: it can create by combining information, but not through long-distance reasoning or transformative creation. The next generation of models, with better reasoning and multimodal abilities, is needed.
The remaining 10% is a matter of time. Future major applications might still emerge based on GPT-4's capabilities.
NLP has been around for 20 years, and although not fully mature, it has produced killer apps like search engines. Similarly, electricity initially only gave us the light bulb, but over time led to various consumer electronics and home appliances.
After over a year of refinement, some AI applications are nearing PMF. This process needs young product geniuses.
Interviews with users, developers, and enterprise customers revealed that they focus on three factors: model capability, cost, and speed/latency.
Assuming model capabilities remain the same at the GPT-4 level, a 3-5 times speed increase and a 100 times cost reduction could spur significant innovation. GPT-4 Turbo is already 3-4 times faster than GPT-4. If costs drop to 1% of today's GPT-4, many new applications could emerge.
OpenAI recently launched a ChatGPT desktop client. It's very convenient, with quick access via hotkeys, making Chrome and Google searches less necessary for my information retrieval.
ChatGPT already has over 100 million DAUs. With improved desktop and mobile clients, reaching 300-500 million DAUs is feasible. At that point, OpenAI could significantly challenge Google.
Cost reduction is certain, but not necessarily the most important factor. Ultimately, it depends on the economic value of the model.
Not all tokens have the same value. The quality of tokens determines the model's business model. For instance, if ChatGPT can advise on stock purchases or sales, its responses are more valuable than a list of search results or reports.
The value per token is determined by the model's capabilities, and it ultimately reflects the model's economic utility.