Implementation of the "Learn No to Say Yes Better" paper.
Updated Nov 2, 2024 - Python
📷 A module for interacting with the Imgflip API.
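Imgflip's documented `caption_image` endpoint takes a template ID, account credentials, and the caption text as form fields. A wrapper might look roughly like the sketch below; the function names are illustrative, not the module's actual interface:

```python
import json
import urllib.parse
import urllib.request

IMGFLIP_CAPTION_URL = "https://api.imgflip.com/caption_image"

def build_caption_payload(template_id, username, password, text0, text1=""):
    """Assemble the form fields Imgflip's /caption_image endpoint expects."""
    return {
        "template_id": template_id,
        "username": username,
        "password": password,
        "text0": text0,
        "text1": text1,
    }

def caption_image(template_id, username, password, text0, text1=""):
    """POST a caption request and return the URL of the generated image."""
    data = urllib.parse.urlencode(
        build_caption_payload(template_id, username, password, text0, text1)
    ).encode()
    with urllib.request.urlopen(IMGFLIP_CAPTION_URL, data=data) as resp:
        body = json.load(resp)
    if not body.get("success"):
        raise RuntimeError(body.get("error_message", "Imgflip request failed"))
    return body["data"]["url"]
```

The API returns a JSON body with a `success` flag and, on success, a `data.url` pointing at the captioned image.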
Implementation of the 2015 paper "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention" on the Flickr30k dataset.
Creating stylish social media captions for an image using multimodal models and reinforcement learning.
BLIP-ImageCaption
This Python script uses OpenAI's GPT-4 Turbo model to generate image captions and stores the captions and file names in a .csv file. It's useful if you need to generate numerous captions for updating alt tags on your website, training machine learning models, etc.
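The caption-then-record loop such a script performs can be sketched as below. This is a minimal outline, not the repository's actual code: `write_caption_csv` and the `caption_fn` parameter are hypothetical names, and the callable passed in stands in for the GPT-4 Turbo chat-completions call with the image attached:

```python
import csv
from pathlib import Path

def write_caption_csv(image_paths, caption_fn, out_path):
    """Caption each image and record (file name, caption) rows in a CSV.

    caption_fn is any callable mapping an image path to a caption string;
    in the script described above it would wrap an OpenAI GPT-4 Turbo call.
    """
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["file_name", "caption"])
        for path in image_paths:
            writer.writerow([Path(path).name, caption_fn(path)])
```

Keeping the model call behind a plain callable makes the CSV plumbing testable without an API key.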
CNN for object detection and classification; RNN for natural language processing.
This Python script reads image captions from a CSV file, shortens them using OpenAI's API, and then saves them to a JSON file. This is particularly useful if you need image captions at or below a certain character count for accessibility, social media, or training an image model.
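The CSV-to-JSON pipeline can be sketched as follows. The function names, the assumed `file_name`/`caption` column headers, and the word-boundary truncation used as a stand-in for the OpenAI rewrite call are all assumptions for illustration, not the repository's actual implementation:

```python
import csv
import json

def shorten(caption, max_chars):
    """Stand-in for the API rewrite: truncate at a word boundary."""
    if len(caption) <= max_chars:
        return caption
    cut = caption[: max_chars - 1].rsplit(" ", 1)[0]
    return cut.rstrip(" ,;:") + "…"

def shorten_captions(csv_path, json_path, max_chars=125, shorten_fn=shorten):
    """Read file_name/caption rows from a CSV, shorten each caption,
    and write a {file_name: caption} mapping to a JSON file."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    shortened = {r["file_name"]: shorten_fn(r["caption"], max_chars) for r in rows}
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(shortened, f, ensure_ascii=False, indent=2)
```

Passing `shorten_fn` in lets the local truncation be swapped for an actual model-backed rewrite without touching the file handling.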