David Talby, PhD

Chief Technology Officer at John Snow Labs

    David Talby is the Chief Technology Officer at John Snow Labs, where he helps companies apply artificial intelligence to solve real-world problems in healthcare and the life sciences. David is the creator of Spark NLP, the world's most widely used natural language processing library in the enterprise. He has extensive experience building and running web-scale software platforms and teams: at startups, on Microsoft's Bing in the US and Europe, and scaling Amazon's financial systems in Seattle and the UK. David holds a Ph.D. in Computer Science and Master's degrees in both Computer Science and Business Administration. He was named USA CTO of the Year by the Global 100 Awards in 2022 and the Game Changers Awards in 2023.

    All Sessions by David Talby, PhD

    Day 3 04/25/2024
    11:30 am - 12:00 pm

    Applying Responsible Generative AI in Healthcare

    NLP

    The past year has been filled with frameworks, tools, libraries, and services that aim to simplify and accelerate the development of Generative AI applications. However, many of them do not work in practice on real use cases and datasets. This session surveys lessons learned from real-world healthcare projects that produced a compelling POC and only then uncovered major gaps between that POC and what a production-grade system requires:

    1. The fragility and sensitivity of current LLMs to minor changes in both datasets and prompts, and the resulting impact on accuracy.
    2. Where guardrails and prompt engineering fall short in addressing critical bias, sycophancy, and stereotype risks.
    3. The vulnerability of current LLMs to known medical cognitive biases such as anchoring, ordering, and attention bias.

    This session is intended for practitioners who are building Generative AI systems in healthcare and need to be aware of the legal and reputational risks involved and what can be done to mitigate them.
