After Action: Llama Hackathon ATX 2024

I recently attended a hackathon in Austin built around Meta's latest open-source models: https://lu.ma/atx-llama-hackathon.

Meta, the main sponsor, sent representatives to spread the word about its Llama Impact Grants program: https://www.llama.com/llama-impact-grants/

I also got to work with some great tools from the event's additional sponsors.

Pflow-prompt

During the hackathon, I decided to test Llama's ability to generate Petri nets using the pflow.xyz notation.

I’ve been interested in using LLMs to convert code into equivalent Petri-net models, so I started with a program written in a state-machine style in Bash.
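The actual script isn’t reproduced here, but a minimal sketch of what I mean by “state-machine style” in Bash might look like this (the workflow, states, and actions are hypothetical):

```bash
#!/usr/bin/env bash
# Hypothetical state-machine-style script: an order workflow.
# Each case handles the current state, performs an action, and
# assigns the next state; the loop dispatches until "done".
state="new"
while [ "$state" != "done" ]; do
  case "$state" in
    new)       echo "submit: taking the order"; state="submitted" ;;
    submitted) echo "ship: packing the order";  state="shipped"   ;;
    shipped)   echo "deliver: order delivered"; state="done"      ;;
  esac
done
```

A program in this shape maps naturally onto a Petri net: each state becomes a place and each action becomes a transition.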

I then provided the script as user input, alongside a system prompt describing the desired conversion.
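The exact prompt isn’t shown here, but a request of this shape can be sent to OpenAI’s chat completions endpoint; the system-prompt wording below is an illustrative stand-in, not the one from the hackathon:

```bash
# Sketch of the request against the OpenAI chat completions API.
# Requires OPENAI_API_KEY in the environment; the prompt text is
# a placeholder for illustration only.
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d @- <<'EOF'
{
  "model": "gpt-4o",
  "messages": [
    {"role": "system",
     "content": "Convert the following Bash state machine into a Petri net in pflow.xyz notation. Use the state names as place labels and the action names as transition labels."},
    {"role": "user",
     "content": "<contents of the Bash script>"}
  ]
}
EOF
```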

Output models:

Remarks

I’m pretty impressed with how well OpenAI’s GPT-4o performed. The model correctly intuited that we’d want to use the state and action names as labels.

It also did a good job laying out the objects. Using the OpenAI Playground, I was even able to follow up and ask it to “add more space between the elements”, with consistent results.

Most of the other submissions made use of natural language translation and auto-classification.

Upon reflection, I could have applied a RAG approach using tools from DataStax, built a tool with LangGraph, or even tried fine-tuning to get the results I wanted from Llama.

In the end, because GPT-4o worked with a single prompt, I didn’t see any benefit in trying to use Llama.

While developing this experiment I used the OpenAI API (https://platform.openai.com/) and spent only $0.06!

Conclusion:

For now, I’ll be using GPT-4o as I add LLM support to pflow.xyz.

Going forward, my sense is that fine-tuning and other refinement approaches will become less relevant as better LLMs are released, and for now the cost is right for this application.