Over the past two years, there has been considerable discussion in the venture community and broader technology ecosystem about the coming era of AI-native applications. We have seen many frameworks outlining the progression of value capture from the infrastructure to platform to application layers, a surge of interest in the evolution of AI agents and countless deep dives on pick-your-favorite vertical AI.
However, what’s been less explored are the distinct characteristics that will define AI-native companies and the ways the enterprise software landscape will evolve. In other words, what truly constitutes an ‘AI-native’ application, and what are the competitive implications for such companies in the years ahead?
This is our attempt to address those questions directly, blending real-time observations from the field and forward-looking insights based on emerging market trends. Our conclusions are drawn from extensive research, hands-on product testing and demos, direct collaboration with startup founders and, critically, numerous in-depth discussions with product and technical leaders across companies of all sizes who are actively building with Generative AI.
Although the current wave of excitement and investment in GenAI began with an application (ChatGPT) in November 2022, the application layer has generally lagged behind other parts of the stack. Many critics of GenAI point to the absence of killer apps as proof that the technology has been overhyped. However, there’s growing evidence that the application layer is starting to catch up.
Funding for GenAI native applications surged in 2024, reaching $8.5B through the end of October and capturing a larger share of overall GenAI investment when compared to the last two years. In the last few months, we have seen announcements of major financing rounds by Perplexity ($500M), Poolside ($500M), Magic ($320M), Sierra ($175M), Abridge ($250M), Glean* ($260M), Writer ($200M) and EvenUp ($135M). While investment levels can signal market exuberance, the real proof lies in traction—and here, the momentum is unmistakable. Over the past few months, a growing number of AI-native applications have started to show significant revenue traction. By our count, there are now at least 47 AI-native applications in the market generating $25M+ in ARR vs. 34 at the beginning of the year, and we believe we will likely see an equivalent number north of $50M ARR by this time next year. Early successes in fields like code assistance, customer support, and marketing are expanding into new functions and vertical use cases. Startups and incumbents alike continue to announce ambitious AI-native offerings. Simply put, today’s ecosystem of AI-native applications is much more robust than it was a year, or even a quarter ago, in our opinion.
Defining AI-Native Applications
In our experience, an AI-native application means AI is central to the application’s experience, not just a supplementary feature. While the term is widely used, its definition—like much in the AI space—is loose and evolving. Drawing from our extensive research and conversations with builders at companies like Canva, Glean*, Meta, Runway, Figma, Abridge and others, we have developed what we believe is a more descriptive and nuanced definition:
- AI-native applications are built on foundational AI capabilities, like learning from large datasets, understanding context, or generating novel outputs.
- AI-native applications deliver outcomes that break traditional constraints of speed, scale and cost, enabling entirely new possibilities.
- AI-native applications are designed to continuously improve, both by leveraging advancements in underlying models and through feedback loops that refine performance using real-world data.
- AI-native applications involve some element of proprietary AI technology vs. 100% off-the-shelf capabilities (e.g., fine-tuning open-source models to improve specific features, model orchestration).
AI-native does not imply that applications must have started with GenAI capabilities. Just as some category defining software franchises from the on-prem era successfully transitioned to cloud-native versions of their products (e.g., Adobe Photoshop, Microsoft Office), we believe many companies can evolve from cloud-native to AI-native over time.
AI-Native: A Temporary Label for an Evolving Landscape
A few final points before we get into the heart of the piece.
The term AI-native, to the extent it is a helpful distinction today, will be a temporary one. Just as we rarely say internet-native, cloud-native or mobile-native anymore, the AI-native moniker will fade away as AI becomes an assumed core component of nearly every product and service in the market. We use it now, in this early phase of the adoption cycle, to delineate between the companies moving fast to augment and/or extend existing products and those building from the ground up with a net-new set of capabilities and assumptions. Over time, the lines will blur, with the starting point mattering less than how deeply AI-first companies become in the way they build both their products and their organizations.
Even in a more AI-native world, the fundamental drivers of value creation do not change. Companies must deeply understand customer pain points and build products and services that meet and surpass customer needs. Strong entrepreneurs must build great teams and execute relentlessly. AI, no matter how advanced it gets, is just a tool in service of those ambitions, not the proverbial hammer in search of its next nail.
Everything we do must be anchored very heavily on the top jobs to be done, the top personas and the top pain points that we need to solve for our customers. Our customers are our main source of innovation.
Emrecan Dogan
Head of Product at Glean*
A Framework for Evaluating AI-Native Applications
Now that we have cleared up what is unlikely to change, it’s time to discuss how things might be new and different going forward. Since early this year, we at Sapphire have used a five-part dimensional framework to evaluate companies building applications with AI.
It is our view that companies will need to differentiate their applications across several of these dimensions to establish durable category leadership, given the already increasing competitive intensity within enterprise software, coupled with the now rapidly decreasing half-life of feature differentiation wrought by the era of AI-assisted development teams.
We will discuss each in turn.
Design
Enterprise software has grown into a multi-trillion-dollar market with vast sums spent annually on solutions that drive operational efficiency and scale. Yet, despite this growth, user-centric design has long been neglected, often sacrificed for functionality over form. The typical experience is cluttered with endless settings, complex menus and frequent notifications—functional, but rarely delightful. We believe this is set to change, with design emerging as a key differentiator for the next generation of enterprise software. GenAI is already opening the design space, and we are closely watching how the technology will enable builders to:
Create New Interaction Models
Over the past two years, chat and search interfaces have been the dominant form of GenAI UI, offering users new opportunities to interact with their data—asking questions, synthesizing, summarizing and brainstorming, among countless other use cases, with a text-based AI companion. GenAI-native user interfaces will allow users to interact more naturally with their applications, while also providing access to advanced features that were previously limited to power users.
Many enterprise tools are filled with powerful features that most users rarely access, either because they are unaware of them or don’t know how to use them effectively. By using natural language–through text or voice–to express their needs, users can unlock more of these existing product capabilities. Multi-modal GenAI models have recently been catching up to their text-based counterparts, which is creating more opportunities for builders to rethink how users can work with software. More performant voice and video models present new ways to create, capture and transform inputs and outputs that complement clicking and typing. Recent releases from OpenAI, introducing Canvas, and from Anthropic, with computer use, though incredibly nascent, point toward the potential evolution from chatbots to co-creation canvases and from co-pilots to auto-pilots.
Multi-modality is extremely important, not only for delivering rich and varied outputs, but because the inputs and how we interact with these tools will continue to evolve. Models and tools that are flexible, expressive and have the ability to understand and generate across a variety of modalities are going to be incredibly important as we look to the future and as the technology continues to improve.
Accelerate Feedback Loops
The non-deterministic nature of GenAI outputs can create challenges for deploying capabilities into production. At the model layer, RLHF has been critical in better aligning AI with human intent and in accelerating iterative improvement. Companies building GenAI applications are both borrowing some of those training techniques and discovering new ways to collect user feedback that allows them to better tune performance and increase the velocity of feature development. In our discussions with product leaders, we heard examples of classic “up/down” and star-based ratings on outputs, human-in-the-loop reviewers and, most interestingly, many creative means of monitoring engagement to collect intent signal (e.g., shares, hover time, content recency and frequency of engagement, copy + pastes).
We believe that products that can intelligently integrate feedback into the user experience in non-disruptive ways will iterate faster and refine their performance to better meet user needs over time.
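To make the idea concrete, below is a minimal sketch of how an application might log both explicit ratings and implicit engagement signals as structured feedback events. The event types, field names and in-memory sink are hypothetical and purely illustrative, not how any of the companies mentioned above implement feedback collection.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

# Illustrative sketch: every field and event type here is hypothetical.

@dataclass
class FeedbackEvent:
    output_id: str                 # which generated output the signal refers to
    user_id: str
    event_type: str                # e.g., "thumbs_up", "star_rating", "copy", "share", "hover"
    value: Optional[float] = None  # rating value, hover seconds, etc.
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_feedback(event: FeedbackEvent, sink: list) -> None:
    """Append a feedback event to a sink (in practice: an event stream or warehouse)."""
    sink.append(asdict(event))

# Example: an explicit rating plus two implicit intent signals on the same output
events: list = []
log_feedback(FeedbackEvent("out-123", "u-7", "thumbs_up"), events)
log_feedback(FeedbackEvent("out-123", "u-7", "copy"), events)               # copy/paste as intent signal
log_feedback(FeedbackEvent("out-123", "u-7", "hover", value=4.2), events)   # seconds hovered

print(json.dumps(events, indent=2))
```

In practice, events like these would flow into an event stream or warehouse where they can be joined with the corresponding outputs to build preference datasets for evaluation and tuning.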
Develop AI-Native Systems
One of the strongest takeaways from our interactions with AI-native application companies is their sophisticated systems-level thinking in application design. This includes balancing off-the-shelf commodity AI components with building proprietary capabilities that optimize performance for their specific use cases. It also involves using a mix of grounding techniques (e.g., fine-tuning, RAG, prompt engineering) and an ensemble approach at the model layer to achieve the best price-performance tradeoff at the query level. The elegance of the design at the UI layer for many AI-native applications is masking an incredible amount of backend complexity.
The acumen associated with building an AI system is not which model you use, it’s breaking down the task, going deep into the application, taking it to its constituent parts and asking which parts do we not have to build because an open-source model can do it, which are we uniquely positioned to fine-tune, which are more subjective…you have to earn the right to be proprietary.
Zachary Lipton
CTO at Abridge
Another critical aspect in designing an AI-native system is integrating explainability into multiple steps of the process. As we transition from an era focused on actions and assistance to one of answers and agents, it becomes crucial to explain how the AI works on behalf of users to build trust and ensure alignment. AI-native applications must clearly tie input to output, cite specific pieces of content, include confidence scores when appropriate and offer deeper levels of explanation to users who need to interrogate system performance.
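As a rough illustration of what this can look like at the payload level, here is a hypothetical response structure that ties an answer back to cited sources and carries a confidence score the UI can act on. The field names, example content and threshold are our own assumptions, not any vendor’s API.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical response structure; field names are illustrative only.

@dataclass
class Citation:
    source_id: str      # document or record the claim is grounded in
    snippet: str        # the specific passage cited
    url: str = ""

@dataclass
class ExplainableAnswer:
    query: str
    answer: str
    citations: List[Citation]   # ties output back to specific input content
    confidence: float           # 0-1 score the UI can surface or threshold on
    trace_id: str               # lets power users interrogate the full generation trace

answer = ExplainableAnswer(
    query="What is our parental leave policy?",
    answer="Full-time employees are eligible for 16 weeks of paid parental leave.",
    citations=[Citation("hr-handbook-2024", "Parental leave: 16 weeks paid...", "https://intranet.example/hr")],
    confidence=0.87,
    trace_id="trace-42",
)

# A UI might only show the answer when confidence clears a bar, otherwise route to review
if answer.confidence < 0.6:
    print("Routing to human review:", answer.trace_id)
else:
    print(answer.answer, "| sources:", [c.source_id for c in answer.citations])
```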
Some of the more interesting expressions of AI-native application design we have seen include:
- Perplexity.ai and OpenAI’s ChatGPT Search augment generated responses by integrating relevant web links and citations.
- Hebbia and Reliant AI use tabular UIs that establish strong feedback loops back to the model by enabling users to interact with output on a more targeted, granular level.
- Cognition offers a native code editor that users can work in directly alongside generated output, helping fine-tune the model on preferred coding practices.
- Rilla and Bland.ai use multimodal AI to reinvent sales processes. Rilla analyzes customer conversations to improve training (speech-to-text), while Bland.ai automates sales and support calls with easily trained digital agents (text-to-speech).
Data
It is well understood how critical data is to training the foundational models that underpin all the AI products and services that have hit the market over the last two years. We believe data is just as critical at the application layer, if not more so, as it helps transform commoditized base capabilities across modalities into targeted defensible products that meet customer needs. As we consider how companies can action data in AI-native applications, we look for evidence that they have identified ways to:
Increase Rigor of End-to-End Data Management
It’s almost a cliché to say there is no AI strategy without data strategy, but we believe it’s true. While AI-native applications benefit from foundational model companies incorporating global data into their products and from customers modernizing their broader data estates, they can still differentiate through strong data management best practices.
This includes data procurement and curation, data quality, data governance and data security. Furthermore, as multi-modal model capabilities advance, the ability to work across both structured and unstructured data will become imperative to maximizing GenAI’s potential. We believe companies that more intelligently and quickly collect, clean and integrate data in a secure manner will win.
Leverage Latent Data
Many companies have known for years that their data has value; however, actioning that knowledge has remained a challenge for most organizations. In our conversations with GenAI company leaders, a consistent theme was how much their products unlock customer data that had either been sitting dormant (e.g., on Box/Google Drive/SharePoint) or was not being captured in systems at all (e.g., customer calls, patient discussions, meeting notes). The benefits are significant, enabling users to:
- More seamlessly engage with existing data
- More quickly access the optimal content for their role and specific need
- Bring more structure and a defined taxonomy to data assets
Each of these leads to a better understanding of how to use company data to create business value and serves as a strong motivator to 1) optimize data architectures to support more AI investment (see above) and 2) give more data to trusted AI-native application partners that are demonstrating early ROI.
We translate unwritten knowledge—like standard operating procedures that may be living in employees’ heads but not written down and insights from customer conversations—into structured data that teams can actually use and scale.
Tony Stoyanov
Co-founder & CTO at EliseAI*
Create New, Proprietary Data Sets
Moving beyond existing data, GenAI promises the potential to capture net-new data sets, which can form the basis of competitive differentiation relative to incumbent applications. This will take many forms, including multi-modal engagement data, metadata on AI-generated content creation and consumption, and pattern recognition in data at both the micro and macro levels. The consistent through-line here is that this data does not exist in an incumbent system. It therefore represents an opportunity for AI-native companies to capture it, create a consolidation point around it and build differentiated workflows that extend its value.
That new data and understanding of the user workflow can then be turned into training data to iteratively improve an underlying model’s performance, thereby extending the competitive advantage of AI-native challengers. This shift underscores that it is no longer simply the quantum of data that imparts advantage.
Gone are the days where having “the most data” as an incumbent platform drove the greatest technical moat. There’s a clear shift in prioritization now, going from data quantity to its quality, privacy, and application.
Vaibhav Nivargi
Founder & CTO at Moveworks*
Examples of AI-native applications demonstrating strong data management and utilization include:
- Glean* trains custom LLMs and builds organization-specific knowledge graphs, using real-time feedback to deliver personalized, contextually relevant search results for each user.
- Writer leverages specialized LLMs to gain a deep understanding of semantic relationships across enterprise data and retrieve relevant, context-aware results for any search or application query.
- Jeeva.ai* unifies sales prospecting data from multiple sources “in flight” to enable users to granularly define their ICP, quickly build prospect lists with accurate enrichment data and generate highly-personalized messaging to automate engagement.
Domain Expertise
As we alluded to earlier, there has been much discussion and excitement around vertical AI applications over the past year. The interest is warranted as industry-specific AI-native applications have been among the fastest to scale, with notable examples in legal, healthcare, real estate and financial services to name a few. We think GenAI’s ability to express deep domain understanding–not only in specific product interactions, but also in end-to-end workflows–is an exciting development that will have a major impact on both vertical and horizontal software categories. When evaluating this dimension, we are looking at an application’s ability to:
Translate Domain-Specific Activity into AI-Accelerated Workflows
One reason we believe vertical AI is taking off so quickly is that GenAI is proving to be incredibly capable at creating better digital representations of end-user activity within specific domains. In our conversations with founders and product leaders, we heard many examples of this in practice, including more accurate translation/transcription of customer conversations (e.g., in doctor-patient discussions), more robust summaries of research inputs (e.g., in legal research and financial analysis), and more refined understanding of user-to-user and user-to-entity relationships (e.g., in enterprise search). In all these instances, and many more, GenAI models are being trained to deeply understand the industry or function-specific context around an individual task and then automate actions that get users to faster and more efficient outcomes.
Domain expertise doesn’t always need to be industry specific. We have spoken with several product and engineering leaders who describe studying power users and senior leaders within customer organizations to codify their usage patterns. By translating these patterns into prompts and structured outputs, they aim to make these insights accessible across all levels of the organization. Supio’s* Head of Product, Pamela Wickersham, describes it as “observing what really sophisticated users of the platform are doing and making that repeatable for other roles and personas at different layers of the firm.” In this way, GenAI applications, sitting on top of foundational models fine-tuned with company-specific data, can enable the type of knowledge transfer that can uplevel an entire employee base.
Synthesize at Scale, with Speed
Another strength of AI-native applications is their ability to derive insights from massive data sets in near real-time. New AI-native applications are emerging across many fields, combining access to verified, industry-specific documents and data (e.g., SEC’s EDGAR database), fine-tuned models and chat-based interfaces to dramatically accelerate how quickly customers can identify and process information relevant to specific objectives. This trend has perhaps been most evident in the legal field to date with companies like Harvey, EvenUp, Robin AI and Supio*. However, we see the same pattern playing out in healthcare, the public sector, insurance, financial services and education.
AI-native applications give users superhuman capabilities on one or more dimensions related to domain-specific requests. It is not an exaggeration to state that questions that used to require large teams of often junior-level employees (or external consultants) days or weeks to answer can now be at least partially addressed in minutes via these new services.
Marry Global + Local Knowledge
AI-native applications are uniquely positioned to combine three types of knowledge: global knowledge that is embedded into foundational models via their vast training data, domain-specific knowledge that lives in industry-relevant databases and company-specific insights from an organization’s own artifacts. On the latter point, domain expertise can be reflected in high-quality presentations, memos, meeting notes, proprietary research, training materials and historical documents that are used to optimize AI-native application outputs so they are consistent with an individual user’s expectations of “what good looks like.”
It’s not just better access to proprietary data, as discussed above. It’s the understanding of how that data reflects the relevant knowledge of an employee, team or organization, within a specific context. This combination unlocks the potential for users to move beyond optimizing individual tasks to automating entire workflows, all while focusing on more specific outcomes. Best-in-class expressions of deep domain understanding through vertical AI applications include:
- Abridge transforms real-time patient audio into precise clinical notes with a multi-LLM architecture trained on a large dataset of medical conversations.
- EliseAI* leverages LLMs to ingest relevant information about a residential building from property management systems (PMSs), CRMs, knowledgebases and directly from leasing professionals, and automate responses to questions from prospective and active tenants.
- Supio* has a proprietary model trained on large datasets of personal injury casework, allowing it to analyze and generate legal documents with high accuracy.
- Magic School provides 80+ AI tools specifically tuned to help educators improve and automate lesson planning, assessment writing, academic content generation/management, and more.
Dynamism
In a recent post titled “Meta’s AI Abundance,” Ben Thompson makes a strong case for Meta’s GenAI opportunity. He specifically highlights the company’s ability to use GenAI to accelerate multi-modal dynamic ad creation and testing, as well as to enable the next generation of personalized content via the company’s new Imagine Yourself model. The potential implications for the digital marketing and commerce industries are clear, and the impacts are likely to be felt sooner rather than later.
These developments also speak to what we believe is a larger force: GenAI will drive an evolution in user expectations across several categories of enterprise software. While slightly less universal than the previous three dimensions we have discussed (e.g., you don’t want too much dynamism when dealing with the General Ledger), we expect to see a shift from static to more dynamic application experiences and are watching for examples of companies that can:
Optimize Actions “Under the Hood” / Optimize Price vs. Performance Tradeoffs in Real Time
Most companies we speak with have moved beyond testing concepts with a single model to orchestrating sequences of model interactions to optimize outputs for a given use case. The process from input to output has become much more dynamic. Companies are developing their infrastructure with flexibility in mind, allowing them to easily swap modular components in and out to realize performance improvements and/or cost efficiencies. This dynamic has led to the rise of model routers, built by companies like Martian, as a critical new component of the infrastructure stack underpinning AI-native applications. Over time, we expect more advanced AI capabilities, which are currently expressed as user choices (e.g., selecting underlying models in ChatGPT or Perplexity, tone of output, scoring outputs), to become hidden, as the underlying system can adapt to make decisions on a user’s behalf.
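A toy sketch of the routing idea is below: the application sends easy requests to a cheaper model and harder ones to a more capable one. The model names, per-token prices and difficulty heuristic are all hypothetical; production routers like those described above typically rely on learned policies rather than keyword rules.

```python
from dataclasses import dataclass

# Illustrative router sketch. Model names, prices and the routing heuristic
# are hypothetical and exist only to show the shape of the approach.

@dataclass
class ModelOption:
    name: str
    cost_per_1k_tokens: float   # hypothetical pricing
    quality_score: float        # hypothetical benchmark score, 0-1

MODELS = [
    ModelOption("small-fast-model", cost_per_1k_tokens=0.0005, quality_score=0.70),
    ModelOption("frontier-model",   cost_per_1k_tokens=0.0150, quality_score=0.95),
]

def estimate_difficulty(prompt: str) -> float:
    """Toy difficulty heuristic: longer, multi-step prompts score higher."""
    length_signal = min(len(prompt) / 2000, 1.0)
    reasoning_signal = 1.0 if any(k in prompt.lower() for k in ("compare", "analyze", "plan")) else 0.0
    return max(length_signal, reasoning_signal)

def route(prompt: str) -> ModelOption:
    """Send easy requests to the cheap model, hard ones to the capable model."""
    difficulty = estimate_difficulty(prompt)
    return MODELS[1] if difficulty > 0.5 else MODELS[0]

print(route("Summarize this paragraph.").name)               # -> small-fast-model
print(route("Compare these two vendor contracts...").name)   # -> frontier-model
```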
Create Generative Customer Journeys
As we mentioned in the Design section, enterprise software has often underdelivered on user experience. While we don’t expect this to change overnight, we are excited about the potential for GenAI to improve the current situation. One way we envision this improvement is through the creation of more dynamic and adaptive content experiences that express a deeper understanding of end-users and customers. Think of sales and marketing collateral tuned to the preferences of individual prospects – from outreach emails, to pitch decks, to landing pages, to contract creation. Think of commerce platforms enabling shoppers to see merchandise in digital twin representations of their own physical spaces or on their own digital avatars.
We already see many examples of this in the market today; however, we can imagine a near-term future in which, as Jeeva* CEO Gaurav Bhattacharya told us, applications “become real-time, continuous learning systems where AI adapts autonomously based on customer interactions.” Longer term, we may see capabilities advance to the point where entire user interfaces are generated in real-time, exposing and hiding underlying capabilities and content as required based on expressions of user intent.
Enable Hyper-Personalization at Multiple Levels
Finally, we see greater opportunities for much more personalized experiences with enterprise software. This will happen on an end-user, team, department and organization-wide level within companies as AI learns relevant preferences, engagement patterns and relationships. For instance, Outreach* builds a custom win model for every team and seller in an organization that is updated in real-time as deals progress. Additionally, communications and collateral are becoming more finely tuned to individual customers. Over time, we believe agents, with shared memory, will represent the fullest expression of this theme.
Some examples we believe showcase dynamism well include:
- HeyGen offers an AI video creation platform enabling GTM and L&D teams to rapidly generate hyper-personalized video content and deploy fully autonomous, conversational video experiences across sales, support and coaching.
- Mercor built an AI Interviewer that can evaluate candidates in real-time, drawing on pre-processed resume and profile data while adapting to live dialogue.
- Evolv AI continuously adapts UX in real-time through AI-driven experimentation, optimizing customer journeys based on live user behavior.
Distribution
Finally, there is the discussion of how this new AI value is packaged and priced. The question arises: does GenAI create an extinction-level event for the traditional seat-based SaaS model so favored by application companies in the cloud era? As we wrote in our August 2024 market memo, we think the reports of software’s imminent demise are greatly exaggerated. While it is too early to determine whether a dominant model emerges to disrupt the status quo, it is apparent that companies are embracing experimentation as they balance new value and costs while mitigating potential new competitive threats. We are closely studying how AI-native applications will:
Increase Pricing and Packaging Flexibility to Maximize Value
We have already entered a much more heterogeneous pricing environment, as we see companies 1) embedding GenAI features into existing services at no additional cost (e.g., Workday), 2) creating premium SKUs of existing products that provide access to GenAI capabilities, 3) delivering entirely new standalone GenAI applications, and 4) testing some consumption and outcome-based offerings on top of a base platform commitment.
The “dominant” model of distributing GenAI has not yet been established. It may vary widely by category; however, we see GenAI as technology that expands the ways that companies can deliver value to meet customer needs. The future is likely to include a mix of apps and agents, as well as co-pilots and auto-pilots. Rather than an either/or debate around pricing, we believe we will see a mix of seat-based, consumption-based and, more selectively, outcome-based pricing. Application developers that can balance different models to ensure breadth of customer coverage, while transparently aligning pricing with value delivered, will be best positioned to win going forward.
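To illustrate how such a mix might be metered, here is a simple, hypothetical calculation that combines seat-based, consumption-based and outcome-based components into one monthly bill. Every rate and volume is invented for illustration and does not reflect any vendor’s actual pricing.

```python
# Hypothetical hybrid-pricing sketch: all rates and volumes below are illustrative.

def monthly_bill(seats: int, seat_price: float,
                 queries: int, included_queries: int, query_price: float,
                 resolved_outcomes: int, outcome_price: float) -> float:
    """Combine seat-based, consumption-based and outcome-based components."""
    seat_component = seats * seat_price
    overage_queries = max(queries - included_queries, 0)
    consumption_component = overage_queries * query_price
    outcome_component = resolved_outcomes * outcome_price
    return seat_component + consumption_component + outcome_component

# Example: 50 seats at $30, 120k queries with 100k included at $0.002 each,
# and 1,500 autonomously resolved tickets at $1.50 each
total = monthly_bill(seats=50, seat_price=30.0,
                     queries=120_000, included_queries=100_000, query_price=0.002,
                     resolved_outcomes=1_500, outcome_price=1.50)
print(f"${total:,.2f}")  # -> $3,790.00
```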
Enable New Business Models
So much has been written recently about the growth of software-enabled services and the potential rise of agentic systems priced to deliver defined business outcomes that we won’t spend too much additional time on the topics. We would just note that true disruption is never simply a function of the technical capabilities of products, but also often includes a change in business model (e.g., transitioning from licenses to subscription software).
Many companies have already introduced new approaches to pricing that incorporate consumption-based and outcome-based components. Some notable examples include:
- Incumbents are moving as well: Salesforce has announced pricing of $2 per conversation for its Agentforce suite, and Zendesk now charges $1.50-$2 per automated resolution.
- Customer service AI agents like Sierra, MavenAGI, Decagon and Crescendo price based on outcomes (e.g., resolved tickets).
- Reserv provides AI-driven claims processing services and prices based on the volume of opened and executed claims.
- Content generation applications like Synthesia price per minute of video generated, while editing tools like Imagen and Aftershoot price per edit.
The Path Forward
We believe the framework outlined offers a powerful lens for both builders and investors to think about points of AI-enabled differentiation at the application layer. Yet, the real breakthroughs will come from entrepreneurs who blend these dimensions in novel ways, fundamentally reimagining work. Retrofitting existing products with more advanced capabilities will still be necessary and will create significant value, but reinvention will be required to birth companies that can reach, or surpass, today’s enterprise software leaders over the long term. Think single canvas, “always-on” multi-modal applications that collapse capabilities from disparate services into a single experience, priced on a metered basis.
To get from today to that potential future will demand significant improvements to every layer of the tech stack and is implicitly predicated on the assumption that scaling laws hold, at least to some degree, and thus yield continued improvements in frontier model capabilities over the next several years. There is also the hard, though less glamorous, work to wrangle existing models in ways that increase performance, decrease hallucinations, ensure alignment, maintain compliance, improve security and manage costs.
Encouragingly, these are all known challenges. Leaders at several of our portfolio companies assured us that there are years of innovation yet to be realized based on the capabilities of the current generation of models alone, so long as costs continue to decline.
While the precise trajectory of GenAI remains uncertain, it’s reasonable to expect that capabilities don’t remain static. So, if we consider the future through a ‘what could go right?’ lens, what do we envision? How might enterprise software applications transform in the coming years?
So how do we get there?
GPT-5 class models, whenever they come, will alter how people view the trajectory of AI regardless of whether the models under or over-deliver on the hype. Shattered benchmarks and newly emergent capabilities will unleash unbounded optimism, while incremental improvements will likely depress expectations, and valuations, in the near term.
Either way, we will have a clearer view of what will and won’t work, and at what cost, over the next few years. The companies both building and buying GenAI technology will be able to plan accordingly. While they are certainly “talking their own book,” we agree with the view expressed by Sam Altman and Greg Brockman that model capabilities will continue to advance and the companies building AI applications should be operating under that assumption.
However, as we have seen in recent months, we do not necessarily need to see a higher number behind a model name to ignite excitement around AI’s future. When we asked product leaders what they were most excited about in the next few years, they talked about the direction of reasoning research and how that may accelerate the progression from core understanding to deeper thinking to true agentic systems.
Right now, people are focused on the “fast food” version of AI and want an answer immediately. One of our big bets is spinning up systems that, if given enough time, can come up with novel answers to complex questions.
Marc Bellemare
Co-Founder and Chief Science Officer at Reliant
Models taking actions, models taking more time to think, and other modalities “catching up” to the advancements in text will all further expand opportunities for applications to differentiate across dimensions in our framework, almost certainly complemented by new areas of research. This is all leading to a new age of experimentation at the application layer, which we could not be more excited about. Builders’ palettes are expanding on what seems like a weekly basis, and they can now mix product architectures, models, user interfaces, public and private data sources and delivery mechanisms in ways that have not previously existed. The rate of change feels relentless.
That said, the path will be uneven. In many categories, we believe progress toward broad deployment may take longer than anticipated. Many, if not most, experiments at reinventing workflows will fail and rather than disrupting incumbents, AI will likely strengthen current leaders in many categories of software. However, the application and agent companies that demonstrate combinatorial innovation through the rapid integration and expression of new capabilities will position themselves as the Companies of Consequence that will define the coming AI era of software.
At Sapphire Ventures, we remain committed to backing and learning from AI-native application companies. If you are building in the space, we would love to connect with you! Please reach out to cathy@sapphireventures.com, kburke@sapphireventures.com, misty@sapphireventures.com or aditya@sapphireventures.com.
Special thanks to the product and technical leaders who shared their insights for this piece: Pamela Wickersham (Head of Product at Supio*), Emrecan Dogan (Head of Product at Glean*), Emily Golden (Head of Growth Marketing at Runway), Vaibhav Nivargi (Founder & CTO at Moveworks*), Tim Smith (Co-Founder & CTO at Medable*), Alon Slutzky (VP of Engineering at Paradox*), Sola Bright (Product Lead at Meta), Zachary Lipton (CTO at Abridge), Kunal Gosar (Co-Founder & CEO at Arcus), Tony Stoyanov (CTO at EliseAI*), Gaurav Bhattacharya (CEO at Jeeva*), Marc Bellemare (Co-Founder & CSO at Reliant AI) and Drew Regitsky (Lead / Founding Engineer at Gem*).
* Asterisks denote Sapphire Ventures portfolio companies.