Jay Rodge

Jay Rodge is a developer advocate for large language models (LLMs) at NVIDIA, where he demonstrates how developers can leverage GPU acceleration in their LLM workflows using tools and frameworks widely used by the developer community. Previously, he was a product marketing manager for data science and deep learning products, driving launches and product marketing initiatives. Jay received his master’s degree in computer science from Illinois Tech, Chicago. Before joining NVIDIA, Jay was an AI research intern at BMW Group, solving computer vision problems for BMW’s largest manufacturing plant.

Posts by Jay Rodge

Generative AI

Creating RAG-Based Question-and-Answer LLM Workflows at NVIDIA

The rapid development of solutions using retrieval-augmented generation (RAG) for question-and-answer LLM workflows has led to new types of system... 11 MIN READ
Generative AI

Deploying Accelerated Llama 3.2 from the Edge to the Cloud

Expanding the open-source Meta Llama collection of models, the Llama 3.2 collection includes vision language models (VLMs), small language models (SLMs), and an... 6 MIN READ
Generative AI

Generative AI Agents Developer Contest: Top Tips for Getting Started

Join our contest, which runs through June 17, and showcase your innovation with cutting-edge generative AI-powered applications built using NVIDIA and LangChain... 3 MIN READ
Data Science

RAPIDS cuDF Accelerates pandas Nearly 150x with Zero Code Changes

At NVIDIA GTC 2024, it was announced that RAPIDS cuDF can now bring GPU acceleration to 9.5M pandas users without requiring them to change their code... 5 MIN READ
Data Science

Accelerated Data Analytics: Machine Learning with GPU-Accelerated Pandas and Scikit-learn

If you are looking to take your machine learning (ML) projects to new levels of speed and scalability, GPU-accelerated data analytics can help you deliver... 14 MIN READ
Data Science

Optimizing and Serving Models with NVIDIA TensorRT and NVIDIA Triton

Imagine that you have trained your model with PyTorch, TensorFlow, or the framework of your choice, are satisfied with its accuracy, and are considering... 11 MIN READ