Oak HC/FT Hosts AI Workshop

September 24, 2024

On September 17, Oak HC/FT hosted a gathering of technical leadership from across our healthcare and fintech portfolio companies to discuss best practices in AI integration and building.

The event featured talks, demos, and case studies, all aimed at deepening our portfolio companies' understanding of AI and its potential impact on their businesses. Participants gained practical insights and strategies from industry leaders like Microsoft, OpenAI, and Oscar.

Rob Ferguson on the Evolving Generative AI Landscape

Rob Ferguson, Head of AI at Microsoft for Startups, opened the AI Workshop with a discussion of the current state of play for GenAI and what we should expect going forward. Rob broke the next decade of AI into three “horizons”: 1) custom copilots, 2) GenAI native applications, and 3) GenAI systems.

Today, in the world of custom copilots, companies are able to harness the capabilities of foundational models and integrate them into existing workflows. With custom copilots, Rob has seen the most success in companies that are adapting an existing workflow to solve a long-standing problem, sharing, “in places where there is less tolerance, meaning you must have precise answers, copilots tend to not work well.”

As more AI companies explore the capabilities of GenAI models, more GenAI native applications will emerge. “These applications deliver on their promises by leveraging contextual awareness of your data and users, all while balancing the trade-offs of costs and abilities,” Rob explained.

The third horizon is the advent of GenAI systems. These are not just agents or workflows, but “agentic AI that works in tandem to scale value by adjusting the context between individual agents and adapting to a plan that can manage costs and the safety of memory in a trusted way.”

So how do we get from custom copilots to GenAI native applications? There isn’t just one way to build a great GenAI application, Rob said. “You’ll become GenAI native when you understand users, context, and data, and how they work with the models you selected.”

Mario Schlosser on Leveraging AI to Improve Healthcare      

Mario Schlosser, Co-Founder & President of Technology at Oscar Health, joined us for a candid conversation and demo on how the tech-driven health insurer is integrating AI into its product and business operations.

At Oscar, Mario is leveraging AI to generate value across three areas of healthcare: 1) alleviating the burden of manual and administrative operations (e.g., contracting, utilization management, and claims review), 2) creating a more personalized and differentiated experience for members, and 3) providing more cost-effective and faster clinical care without sacrificing quality, via Oscar Medical Group.

During his demo, Mario showed how the team is leveraging AI to automate as much of the system as possible. The crux of the demo was how Oscar is using AI to speed up prior authorization. In healthcare, this process often relies on a stale rule base, stale clinical information, and unstructured data from disparate sources. In utilization management, that becomes very challenging – and time-consuming – when a human must sort through volumes of medical records and then cross-reference them with countless coverage rules to make a decision.

Oscar has partnered with OpenAI to build AI into its utilization management decision-tree process, first on 4o and now on o1. Initial iterations of the tooling helped match rules to medical records fairly accurately and cut down nurse review time – but they lacked reliability and still required significant human review. With the new OpenAI model, the recommendations look promising. While still being tested, the models are now matching rule requirements in a much more meaningful way – cutting down nurse review time even further.
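The rule-vs-record matching step Mario described can be sketched, in highly simplified form, as a loop that checks each coverage criterion against the medical record and flags anything unmet for nurse review. The sketch below is hypothetical – Oscar's actual pipeline, rules, and prompts are not public – and it substitutes a trivial keyword check where a reasoning model call (e.g., o1 over the full chart) would sit, so the example runs self-contained.

```python
# Hypothetical sketch of matching coverage rules against a medical record
# in a prior authorization review. In a real pipeline, `criterion_is_met`
# would call a reasoning model over the chart; a naive keyword match
# stands in here so the example is self-contained.

from dataclasses import dataclass


@dataclass
class Criterion:
    rule_id: str
    description: str
    keywords: list[str]  # stand-in for model reasoning over the record


def criterion_is_met(criterion: Criterion, record_text: str) -> bool:
    """Return True if every keyword appears in the medical record."""
    text = record_text.lower()
    return all(kw.lower() in text for kw in criterion.keywords)


def review(criteria: list[Criterion], record_text: str) -> dict:
    """Match each coverage rule against the record; flag gaps for a nurse."""
    met = [c for c in criteria if criterion_is_met(c, record_text)]
    unmet = [c for c in criteria if not criterion_is_met(c, record_text)]
    return {
        "approve_recommended": not unmet,   # auto-approve only if all rules match
        "met": [c.rule_id for c in met],
        "needs_human_review": [c.rule_id for c in unmet],
    }


criteria = [
    Criterion("MRI-1", "Conservative therapy attempted", ["physical therapy"]),
    Criterion("MRI-2", "Symptoms persist past 6 weeks", ["6 weeks"]),
]
record = "Patient completed physical therapy; pain persisting for 6 weeks."
result = review(criteria, record)
```

The design point Mario's demo illustrates is the triage split: requests where every rule clearly matches can be fast-tracked, while anything ambiguous stays with a human reviewer – the model narrows the pile rather than replacing the nurse.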

Alexander Statnikov and Craig Kelly on Optimizing Customer Support in Financial Services with AI

Crosswise Co-Founders Alexander Statnikov and Craig Kelly presented a case study on using AI to automate customer support in the financial services industry. They walked through how they explored various strategies for AI enablement, weighing the pros and cons of each option:

  • Buy: Deploying third-party solutions is a non-trivial task (integration, multiple tools, interfaces, etc.) but can significantly accelerate time-to-market, often at a fraction of internal development cost.
  • Partner: AI model providers are eager to partner.
  • Build: Building in-house can be advantageous based on unfavorable vendor pricing (even when accounting for engineering support and maintenance costs), an inability to meet customer requirements, and regulatory compliance or intellectual property considerations.

Kyle Langworthy and Chett Garcia on Competing for AI Talent

Riviera Partners’ Kyle Langworthy and Chett Garcia spoke to the state of the current AI talent market, noting that AI searches are weighted toward VC- and PE-backed companies irrespective of sector or even domain within companies.

They also spoke to the pressing issue of AI talent compensation. While high compensation packages for AI executives are making headlines, Kyle and Chett said these cases are exceptions rather than the norm, and that the overall compensation range remains relatively tight. For venture-backed companies, salaries are still closely linked to the company's stage and the amount of funding raised.

To attract and retain AI talent in today’s competitive landscape, Kyle and Chett highlighted several effective strategies, including offering flexible work arrangements, showcasing the meaningful impact employees can have, and ensuring team members have a “seat at the table” in decision-making. They also noted open-source technology and proprietary offline data sets as draws for AI talent.

OpenAI o1 Demo

Just days before our AI workshop, OpenAI launched a preview of OpenAI o1, a new series of AI models designed to reason through complex tasks and solve harder problems than previous models. OpenAI solutions architects joined us to demo o1 and field questions from our portfolio on OpenAI’s product roadmap.

Oak HC/FT Partner Vig Chandramouli shared his thoughts on o1 following the demo:

  • GPT-4o is still the right model for a variety of tasks. It can take multi-modal inputs, is up to 30x faster, and is 3-4x cheaper per million input and output tokens. But 4o cannot reason the way o1 can, and that has some real benefits for complex problem solving.
  • With o1, you can:
    • Feed the model a complex set of guidelines and have it compare them against a set of information to arrive at a conclusion.
    • Get the model to come up with a plan of action, which can then be executed by GPTs with a clear focus (i.e., it can prompt better than most of us and get the best out of the 4o models).
  • By virtue of being in preview, o1 is not yet fully enterprise-ready the way 4o is. OpenAI is granting access to let the community familiarize themselves with its capabilities ahead of any full release.
  • Note, this is NOT OpenAI’s new frontier model.
  • Interestingly, o1 often thinks for 10-20 seconds before providing a detailed response. It’s possible that it’s running parallel and sequential queries via 4o and then picking the “best” one.
  • OpenAI mentioned that they are working on a tool to auto-select the right model for the right task, and that should make this part of the trial/error stack much easier.
  • o1-mini is an equally powerful but more focused model. It’s very good at writing code and can materially improve code performance vis-à-vis 4o. That said, today, the best use cases are still larger refactor jobs rather than day-to-day work.
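The "plan with o1, execute with 4o" pattern from Vig's notes can be sketched as a planner/executor split: one expensive reasoning call decomposes the task, then a cheaper, faster model runs each narrow step. This is an illustrative sketch only – the two stub functions stand in for API calls to o1 and GPT-4o, and the prompts and function names are invented for the example.

```python
# Hypothetical sketch of the plan-then-execute pattern: a reasoning model
# (o1-style) produces a step-by-step plan once, and a faster, cheaper
# model (GPT-4o-style) executes each focused step. Stubs stand in for
# real API calls so the example is self-contained.


def plan_with_reasoning_model(task: str) -> list[str]:
    """Stand-in for an o1-style call that decomposes a task into steps."""
    return [
        f"Summarize the inputs relevant to: {task}",
        f"Draft a response for: {task}",
        f"Check the draft against the guidelines for: {task}",
    ]


def execute_with_fast_model(step: str) -> str:
    """Stand-in for a GPT-4o-style call that runs one narrow, focused step."""
    return f"[done] {step}"


def run(task: str) -> list[str]:
    # The expensive reasoning call happens once, up front; each resulting
    # step is then a short, cheap prompt for the faster model.
    return [execute_with_fast_model(s) for s in plan_with_reasoning_model(task)]


results = run("respond to a member billing question")
```

The cost logic follows directly from the figures above: if 4o is several times cheaper and faster per token, paying for one o1 planning call and fanning the steps out to 4o can beat running the whole task through the reasoning model.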

Portfolio Company Workshops  

To close out the day, Oak HC/FT investors facilitated small-group workshops with our portfolio companies, with the goal of accelerating learning across the portfolio through case studies and example sharing between companies.

Oak HC/FT Principal Oivind Lorentzen on the fintech breakouts:

“On the financial services side, companies have seen the most success in developing their own models for specific use cases. This is the result of strict data privacy policies from customers, which inhibit them from using LLMs because they can't run them in fully closed environments. Companies are interested in tools that help manage these data privacy concerns and controls. Internal use cases are where the most opportunity is right now across the board for LLMs – leveraging co-pilots to support coding and automating manual processes like completing RFPs, questionnaires and other document ingestion or creation is becoming a game changer for productivity.”

Oak HC/FT Vice President Charlotte Black on the healthcare breakouts:

“AI has rapidly accelerated automation capabilities to remove administrative waste and processes in healthcare. We have been in walk-run mode for the past few decades with OCR, NLP, RPA, and machine learning. The advent of AI is creating that next innovation thrust. With foundational models advancing so rapidly, our companies are focused less on fine-tuning models and more on using AI within their purpose-built applications. This has manifested in companies using different models to support product needs (making decisions around cost, level of healthcare compliance, speed, and innovation), but also building in a way that your product has an application moat as models continue to accelerate (e.g., one year ago ChatGPT wasn't deterministic, now it is).”