Querying OpenAI GPT Models¶
A Generative Pretrained Transformer (GPT) is a type of Large Language Model (LLM) that supports natural language querying and is typically trained on a very large collection of data, making it possible for queries to address a wide range of topics.
GPT models from OpenAI have become popular through the use of their ChatGPT interface, which lets users type in questions and see answers in response to those questions. GraphDB provides a set of magic predicates, implemented as extension functions, that let your SPARQL queries communicate with OpenAI GPT models, letting you combine the power of these models with your own knowledge graphs.
Configuring Your Use of GPT Models¶
The following settings in your conf/graphdb.properties file (or on the startup command line, as described in the Configuration section) can customize how your copy of GraphDB uses the OpenAI GPT models. Except for the graphdb.gpt.token setting, all of these settings are either optional or have a default value.
Note
You must obtain an appropriate API key and set the graphdb.gpt.token value before you can use the GPT functions listed below in your SPARQL queries.
Use of the OpenAI API requires API credits, which can be purchased on the OpenAI website. Free accounts are available, and may include API credits depending on the promotions currently available from OpenAI. Free credits may expire if they go unused.
graphdb.gpt.token: The authentication token for the OpenAI API. To obtain an authentication token, first create an account by clicking Sign up on the OpenAI home page. Then, you can create a key on their API Keys page.

graphdb.gpt.model: The OpenAI model to use. The default value is gpt-3.5-turbo. The model must support the OpenAI Completions API. The available models depend on your OpenAI account; the integration requires one of the more recent models whose API URL contains /chat/completions (currently, the gpt-xxx models) and not one of the older models whose API URL has /completions without /chat/.

graphdb.gpt.timeout: The maximum time, in seconds, that GraphDB will wait for OpenAI to provide a response. The default value is 90.

graphdb.gpt.url: The OpenAI chat completions API endpoint. The default is https://api.openai.com/v1/chat/completions, which corresponds to OpenAI’s main API. This setting can be used to connect to another compatible provider such as Azure OpenAI. An Azure OpenAI endpoint URL follows this model: https://<some-id>.openai.azure.com/openai/deployments/<another-id>/chat/completions?api-version=yyyy-mm-dd

graphdb.gpt.auth: The authentication method to use for the chat completions endpoint. Optional. The default value is bearer. The possible values are:

bearer: Sends the token via the HTTP header “Authorization: Bearer <token>”.

api-key: Sends the token via the HTTP header “api-key: <token>”. (Use this for Azure OpenAI.)

custom: Sends a token that must consist of a header name and value separated by a colon (for example, my-header:my-auth-value). GraphDB will send it as the HTTP header “my-header: my-auth-value”.

none: No authentication headers will be sent.
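Putting these settings together, a minimal conf/graphdb.properties fragment might look like the following sketch. The token and the commented-out Azure values are placeholders, not values taken from this documentation:

```properties
# Required: your OpenAI (or compatible) API key
graphdb.gpt.token = <your-api-key>

# Optional: any chat-completions model your account supports (default shown)
graphdb.gpt.model = gpt-3.5-turbo

# Optional: maximum wait for a response, in seconds (default 90)
graphdb.gpt.timeout = 90

# For Azure OpenAI, point at your deployment and switch the auth header:
# graphdb.gpt.url = https://<some-id>.openai.azure.com/openai/deployments/<another-id>/chat/completions?api-version=yyyy-mm-dd
# graphdb.gpt.auth = api-key
```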
GPT Functions¶
The GPT chat functions described below send your queries to the OpenAI GPT model that you have set in your configuration and return the answer as part of your SPARQL query results.
These functions are implemented as magic predicates. These look like triple patterns, but arguments are passed in the object position and the result is bound to the variable or parenthesized variable list in the subject position.
In the syntax examples below, the gpt: prefix represents the URI http://www.ontotext.com/gpt/.
Setting the temperature of the response¶
The last argument passed to gpt:ask, gpt:list, or gpt:table can be a real number between 0 and 2 that sets the temperature of the response. Higher values such as 1.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. The default value is 1.
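For example, the following sketch passes 0.2 as the last argument to gpt:ask to request a more deterministic answer. It assumes the configuration described above; the actual numbers returned can still vary:

```sparql
PREFIX gpt: <http://www.ontotext.com/gpt/>
SELECT * WHERE {
    ?primes gpt:ask ("List three prime numbers between 100 and 200." 0.2)
}
```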
gpt:ask() — Retrieve a single answer¶
The gpt:ask() function passes one or more messages with instructions to the OpenAI GPT model. The result (unlike with gpt:list) is stored in a single binding. The last message passed can be a real number between 0 and 2 to set the temperature of the response.
Because gpt:ask() is a magic predicate, you call it as a triple pattern with the variable in which to store the response as the subject and the messages to pass (enclosed in parentheses if there is more than one) as the object:
?answer gpt:ask ?message
?answer gpt:ask (?message1 ?message2 ...)
The following SPARQL query passes two messages in the triple pattern’s object position:
PREFIX gpt: <http://www.ontotext.com/gpt/>
SELECT * WHERE {
?primes gpt:ask ("List three prime numbers."
"Make them between 100 and 200.")
}
Note how the result of this query includes three numbers, but only one value is actually returned: a single quoted string with those numbers:
If the execution of your query binds a variable multiple times because of the use of another data source, GraphDB will call gpt:ask for each one. This provides an excellent way for a query to combine information from your own knowledge graph with an OpenAI GPT model.
The following example lists the types of grape stored in a wine dataset. It then creates a query from each of those grape names asking the GPT model what brand of wine uses that grape:
PREFIX wine: <http://www.ontotext.com/example/wine#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX gpt: <http://www.ontotext.com/gpt/>
SELECT ?brands WHERE {
?grape a wine:Grape ;
rdfs:label ?grapeName .
BIND(CONCAT("What brand of wine uses the ", ?grapeName, " grape?") AS ?query)
?brands gpt:ask ?query .
}
The following shows the first three rows of the result:
Queries may use punctuation for more structure, but there is no specific syntax to follow. All messages are treated as text. For example, the following query lists National Basketball Association players from Australia who have played in the guard position:
PREFIX gpt: <http://www.ontotext.com/gpt/>
SELECT * WHERE {
?player gpt:ask ("List NBA players. nationality: Australian. position: guard." )
}
gpt:list() — Retrieve a list of answers¶
The gpt:list() function passes one or more messages with instructions to the OpenAI GPT model. These instructions should name a specific number of results that you would like. The results (unlike with gpt:ask) can then be returned as multiple bindings of the specified variable, which are displayed as separate result set rows. The last message passed can be a real number between 0 and 2 to set the temperature of the response.
Because gpt:list() is a magic predicate, you call it as a triple pattern with the variable in which to store the response as the subject and the messages to pass (enclosed in parentheses if there is more than one) as the object:
?answer gpt:list ?message
?answer gpt:list (?message1 ?message2 ...)
The following example SPARQL query is very similar to the first example shown in the description of the gpt:ask() magic predicate, but it calls gpt:list() instead:
PREFIX gpt: <http://www.ontotext.com/gpt/>
SELECT * WHERE {
?primes gpt:list ("List three prime numbers."
"Make them between 100 and 200.")
}
Note how, unlike the gpt:ask() version of the query, the result set has four rows (one for an introduction and one for each prime number returned):
Tip
The result may or may not include an introduction like the “Here are three…” row above. You can prevent this with more specific instructions passed to the function as additional messages such as “Return only the numbers as a markdown list” or even just “Format: markdown list” (or “Format: HTML list”).
Without naming a specific list format, you can specify that you do not want the results as a CSV list on a single line by adding a message such as “Return one number per line.”
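For example, a format instruction can be passed as an additional message, as in this sketch (the exact formatting of the model’s answer can still vary):

```sparql
PREFIX gpt: <http://www.ontotext.com/gpt/>
SELECT * WHERE {
    ?primes gpt:list ("List three prime numbers between 100 and 200."
                      "Return only the numbers, one per line.")
}
```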
gpt:table() — Retrieve a table of answers¶
This function sends one or more requests to create multiple bindings with multiple values—in other words, a table. The last message passed can be a real number between 0 and 2 to set the temperature of the response.
Because gpt:table is a magic predicate, you call it as a triple pattern with the variable(s) in which to store the response as the subject and the messages to pass as the object. The subject can list more than one variable to store the values of the table columns. If you have more than one subject variable, enclose the list in parentheses. Similarly, if you pass more than one message, enclose the list in parentheses. The following shows the potential forms:
?column1 gpt:table ?message
?column1 gpt:table (?message1 ?message2 ...)
(?column1 ?column2) gpt:table ?message
(?column1 ?column2 ?column3) gpt:table (?message1 ?message2 ...)
The following SPARQL query shows an example:
PREFIX gpt: <http://www.ontotext.com/gpt/>
SELECT * WHERE {
(?name ?birthday ?instrument) gpt:table ("List the Beatles, their birthdays, and the instrument that each played." )
}
The following shows the result:
In addition to having gpt:table create a new table for you, you can use the helper functions described at List Manipulation Extension Functions with a VALUES clause to pass an incomplete table and ask the GPT model to fill in the missing values. Note how the three undef values in this next query’s VALUES table get replaced by appropriate descriptions in the result:
PREFIX helper: <http://www.ontotext.com/helper/>
PREFIX gpt: <http://www.ontotext.com/gpt/>
SELECT * WHERE {
{
select (helper:tupleAggr(?row) as ?table) {
values (?item ?description) {
("banana" "A banana is a curved, yellow fruit with a sweet taste and creamy texture.")
("strawberry" "A strawberry is a small, sweet, red fruit with a juicy texture and a white center.")
("pineapple" undef)
("egg" undef)
("mulberry" undef)
}
bind(helper:tuple(?item, ?description) as ?row)
}
}
(?item ?description) gpt:table ("complete the missing columns" ?table)
}
List Manipulation Extension Functions¶
The GraphDB extensions to the SPARQL standard that are described in this section make it easier to assemble several pieces of data into a single structure for easier passing to GPT Functions. (These extension functions are not related to the Jena list function extensions or to the collections predicates that can be used to create and manipulate RDF lists.)
helper:tuple() — Combine values into a list¶
This function combines all of its arguments into an internal list that you can reference using a blank node connected to the list members. You can access individual members of the list using the helper:iterate() function described further in helper:iterate() — Iterate through an internal list.
In the example below, the helper:tuple() function combines the two strings “foo” and “bar” and the IRI ex:baz into a list stored with the variable ?tuple.
PREFIX helper: <http://www.ontotext.com/helper/>
PREFIX ex: <http://www.example.com/>
SELECT ?tuple ?listMember WHERE {
BIND(helper:tuple("foo", "bar", ex:baz) as ?tuple)
?listMember helper:iterate ?tuple .
}
In the result, we see that the helper:iterate magic predicate lists the ?tuple members next to the identifier for the list’s blank node:
You can also call helper:tuple as a magic predicate. The following query does the same thing as the previous one, but uses the function as a magic predicate in a triple pattern:
PREFIX helper: <http://www.ontotext.com/helper/>
PREFIX ex: <http://www.example.com/>
SELECT ?tuple ?element WHERE {
?tuple helper:tuple ("foo" "bar" ex:baz) .
?element helper:iterate ?tuple
}
helper:tupleAggr() — Aggregate values into a list¶
The helper:tupleAggr() function is similar to the helper:tuple function but takes only one argument. If the argument is bound to multiple values, it will aggregate those values into a list that can be accessed by a blank node connected to the list members. You can access individual members of the list using the helper:iterate() function described further in helper:iterate() — Iterate through an internal list.
Note

Remember that helper:tupleAggr() is a SPARQL aggregate function and cannot be called inside of a WHERE clause like the functions that you might use with a FILTER clause.
In the example below, the helper:tupleAggr() function combines all of the unique ?grapeName values from the wine dataset into a list stored in the variable ?tuple:
PREFIX wine: <http://www.ontotext.com/example/wine#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX helper: <http://www.ontotext.com/helper/>
SELECT ?tuple ?listMember WHERE {
?listMember helper:iterate ?tuple
{
SELECT (helper:tupleAggr(DISTINCT ?grapeName) as ?tuple) {
?grape a wine:Grape ;
rdfs:label ?grapeName .
}
}
}
In the result, we see that the helper:iterate magic predicate lists the ?tuple members next to the identifier for the list’s blank node:
helper:rdf() — Combine values into an RDF triple¶
The helper:rdf() function takes a subject, predicate, and object and combines them into an internal triple connected by a blank node that can be passed to the helper:serializeRDF function. You can provide the triple components directly or combine them using the helper:tuple function. If you do use the helper:tuple function, you may also add a reference to a named graph to store the triple as a quad, as shown in the third form below:
helper:rdf(<< ?s ?p ?o >>)
helper:rdf(helper:tuple(?s, ?p, ?o))
helper:rdf(helper:tuple(?s, ?p, ?o, ?g))
One use case where this is useful is to query your own graph for the data that you are interested in and then pass that data to a GPT model with a natural language query about that data. The following query does this with these three steps:

1. Select the name and height values of the humans in a Star Wars dataset.

2. Use helper:rdf to bind these triples to the ?rdf variable.

3. Pass the ?rdf data with a natural language query about the height of certain characters to the GPT model using the gpt:ask() function:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX voc: <https://swapi.co/vocabulary/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX helper: <http://www.ontotext.com/helper/>
PREFIX gpt: <http://www.ontotext.com/gpt/>
SELECT ?answer WHERE {
{
SELECT (helper:rdf(helper:tuple(?s, ?p, ?o)) AS ?rdf) {
?s a voc:Human ;
?p ?o .
FILTER(?p in (rdfs:label, voc:height))
}
}
?answer gpt:ask ("who is taller than 190cm or shorter than 170cm in this RDF data?" ?rdf)
}
You can use the helper:serializeRDF extension function to learn more about what the helper:rdf function is storing as you refine your query.
helper:serializeRDF() — Convert internal RDF to readable triples¶
This function serializes RDF that has been stored internally with a function such as helper:rdf so that you can see the contents of the triples. The default format is Turtle, but you can request a different one by naming the MIME type as an optional second argument.
helper:serializeRDF(?rdf)
helper:serializeRDF(?rdf, "application/rdf+xml")
The following query is a modified version of the one shown in helper:rdf() — Combine values into an RDF triple. For this one, the helper:serializeRDF() function also stores a converted version of ?rdf in the ?rdfSer variable, and the outer SELECT statement requests the value of both along with the answer.
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX voc: <https://swapi.co/vocabulary/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX helper: <http://www.ontotext.com/helper/>
PREFIX gpt: <http://www.ontotext.com/gpt/>
SELECT ?rdf ?rdfSer ?answer WHERE {
{
SELECT (helper:rdf(helper:tuple(?s, ?p, ?o)) AS ?rdf) WHERE {
?s a voc:Human ;
?p ?o .
FILTER(?p in (rdfs:label, voc:height))
}
}
BIND(helper:serializeRDF(?rdf) as ?rdfSer)
?answer gpt:ask ("who is taller than 190cm or shorter than 170cm in this RDF data?" ?rdf)
}
In the result (of which only the beginning is shown below, because many more triples were serialized), we see that ?rdf is the value of the blank node tying the triple components together and ?rdfSer shows the Turtle-star serialization of the RDF.
helper:iterate() — Iterate through an internal list¶
This magic predicate iterates over the elements of an internal list created by the helper:tuple, helper:tupleAggr, or helper:rdf functions. See the sections helper:tuple() — Combine values into a list and helper:tupleAggr() — Aggregate values into a list for two examples.
GPT Query Explanations¶
GraphDB’s Query Profiling with the Explain Plan feature lets you gather data about how a given query will execute so that you can explore ways to improve it. The GPT Explain feature gives you a similar way to learn more about what the query does and what the result represents.
The following query asks for the name of each Academy Award that the “Star Wars” movie won and how many people shared the award. If you run this query in the Workbench by holding down the Alt key when you click the Workbench’s Run button, the result will include an additional __gpt column that provides information about both the query and the result, as shown below.
PREFIX voc: <https://swapi.co/vocabulary/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?awardName (COUNT(?person) AS ?teamCount)
WHERE {
?award a voc:AwardRecognition ;
voc:awardStatus "awarded" ;
voc:award ?awardType ;
voc:person ?person ;
voc:forWork "Star Wars"@en .
?awardType rdfs:label ?awardName .
FILTER ( lang(?awardName) = "en" )
}
GROUP BY ?awardName
ORDER BY DESC(?teamCount)
For a CONSTRUCT or DESCRIBE query, doing this returns an extra triple with the output in the object: onto:gpt onto:gpt "output".
Note
In addition to holding down the Alt key when you click the Run button, there are two other ways to add the __gpt column that describes the query and its result:

By pressing Alt+[Ctrl/Cmd]+Enter with your cursor in the query edit field.

By using FROM <http://www.ontotext.com/gpt> to add this special named graph to the query. This is useful when using the RDF4J Java client or the RDF4J REST API to send a query to GraphDB, or to improve query reproducibility, for example by using a Saved Query in GraphDB Workbench, sharing a query by email, or as a link.
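For example, adding the special named graph to a simple query requests the explanation without any keyboard shortcut. This is a sketch; the exact wording of the __gpt explanation will vary:

```sparql
SELECT ?s ?p ?o
FROM <http://www.ontotext.com/gpt>
WHERE {
    ?s ?p ?o
}
LIMIT 10
```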
Customizing the explanation output¶
Specialized strings added to your query as comments give you greater control over the output of a GPT query explanation.
:gpt: <additional instruction> – Follow this additional instruction about how to provide the result. See the example below.

:gpt-query-only: [optional instruction] – Provide an explanation of the query but not the result. This can include an optional instruction to do something else with the query instead of explaining it.

:gpt-result-only: [optional instruction] – Provide an explanation of the result but not the query. This can include an optional instruction to do something else with the result instead of explaining it.

:gpt-no-eval: – Do not include the query result in the output.
The following query includes a :gpt: comment between the query’s SELECT and WHERE clauses (remember to execute it by holding down the Alt key when you click the Run button, or with one of the alternatives described above):
PREFIX gpt: <http://www.ontotext.com/gpt/>
SELECT *
# :gpt: Respond in Shakespearean language
WHERE {
?primes gpt:ask "List three prime numbers that are greater than 100."
}
The result’s __gpt explanation uses vocabulary and grammar similar to that of William Shakespeare:
Certain combinations of these instructions can work together. For example, the comments in the following DESCRIBE query (using the Star Wars dataset) tell GraphDB to ask the GPT model to write a song, and not to show the results of the query:
# :gpt-result-only: Write a short pop song about this data
# :gpt-no-eval:
DESCRIBE <https://swapi.co/resource/planet/1> <https://swapi.co/resource/human/1>
Setting the temperature of the explanation¶
A comment with :gpt-temp: <number> will set the temperature of the explanation. As with setting the temperature of a query result, this is a value between 0 and 2, with higher values requesting more randomness in the output. The default value is 1.
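For example, this sketch requests a low-randomness explanation of a simple query (run it with the Alt key held down or one of the other methods described above; the exact explanation text will still vary):

```sparql
PREFIX gpt: <http://www.ontotext.com/gpt/>
SELECT *
# :gpt-temp: 0.2
WHERE {
    ?primes gpt:ask "List three prime numbers that are greater than 100."
}
```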