How to Automate Data Classification in PostgreSQL With OpenAI
Businesses are inundated with data from various sources, including customer interactions, transactions, support queries, product reviews, and more. This makes data classification a crucial task. However, classifying unstructured data, such as customer reviews and support interactions, has always been challenging. The emergence of large language models (LLMs) is simplifying this process.
In this tutorial, we’ll explore how to use the open-source pgai and pgvector extensions to automate data classification directly within PostgreSQL. This approach is particularly helpful if you already have data within PostgreSQL or want to build classification systems without relying on additional vector databases or frameworks.
Automating Data Classification in PostgreSQL: Tools
Let’s first take a quick look at pgvector and pgai, the two open-source extensions we’ll use with PostgreSQL. We’ll also look at how OpenAI models can help with the process.
Pgvector: making PostgreSQL a vector database
Pgvector is a powerful open-source PostgreSQL extension that brings vector-handling capabilities to the database and allows you to store, query, and manage high-dimensional vectors directly within your tables. It is useful for building semantic search, recommendation systems, and data classification algorithms using PostgreSQL.
Introduction to OpenAI models
OpenAI offers a range of advanced language models that are updated as the technology advances. The flagship models, GPT-4o and GPT-4o Mini, are the latest ones at the time of writing this article. These models are multimodal, capable of processing text and image inputs while producing text outputs, and have been architected to tackle complex, multi-step tasks with high accuracy and speed.
GPT-4o, the most advanced model in the lineup, features a context window of up to 128,000 tokens, allowing it to maintain extensive contextual awareness across long conversations or documents. This model is faster and more cost-effective than previous iterations, such as GPT-4 Turbo.
For developers needing a lighter option, GPT-4o Mini is a smaller model and OpenAI's cheapest. It is more capable than GPT-3.5 Turbo while being faster and cheaper, and it is meant for lightweight tasks.
The previous generation models, GPT-4 Turbo and GPT-3.5 Turbo, remain available. GPT-4 Turbo also includes vision capabilities, supporting JSON mode and function calling for tasks involving text and images.
What is pgai?
Pgai is an open-source extension for PostgreSQL that brings AI-powered capabilities directly to your database. You can use pgai to interact with machine learning models and build AI workflows within PostgreSQL, enabling you to create AI-powered systems without ever leaving your database environment.
The true power of pgai emerges when it is used in conjunction with pgvector and OpenAI. You can use pgai to harness the vector data stored in PostgreSQL via pgvector and call OpenAI methods to classify this data automatically. This combination allows you to build a fully automated data classification pipeline within PostgreSQL.
Setting Up
First, you’ll need a working installation of PostgreSQL with the pgvector and pgai extensions. You can either install them manually or use a pre-built Docker container. Alternatively, you can simply use Timescale Cloud for a free PostgreSQL cloud instance, pre-installed with pgai and pgvector.
Log in or create an account on Timescale Cloud, pick your service type, region, and compute, and then click on ‘Create a service’.
Once your service is created, you’ll receive the connection string, username, password, database name, and port. You can also download the database config.
Let’s save the PostgreSQL database connection string as an environment variable.
You will also need OpenAI API keys. Head to platform.openai.com to get one. Once you have it, save it in an environment variable as well.
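Assuming a POSIX shell, the two environment variables can be set as follows. The variable names DB_URL and OPENAI_API_KEY are just conventions used in this tutorial; substitute the actual connection string and key from your Timescale Cloud service page and OpenAI account.

```shell
# Variable names are illustrative -- substitute your own values.
export DB_URL="postgres://tsdbadmin:<password>@<host>:<port>/tsdb?sslmode=require"
export OPENAI_API_KEY="<your-openai-api-key>"
```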
Now you can use psql to connect in the following way:
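This assumes the connection string was saved in the DB_URL environment variable as shown above:

```shell
# Connect to the database with psql using the saved connection string
psql "$DB_URL"
```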
You can activate the pgai and pgvector extensions:
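A minimal sketch of the activation commands; on Timescale Cloud both extensions may already be enabled, in which case these statements are no-ops:

```sql
-- Enable pgvector and pgai; CASCADE pulls in dependencies such as plpython3u
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS ai CASCADE;
```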
Let's check if the pgai and pgvector extensions are enabled in the PostgreSQL database by running the following command:
tsdb=> \dx
List of installed extensions
Name | Version | Schema | Description
---------------------+---------+------------+---------------------------------------------------------------------------------------
ai | 0.4.0 | ai | helper functions for ai workflows
pg_stat_statements | 1.10 | public | track planning and execution statistics of all SQL statements executed
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language
timescaledb | 2.17.2 | public | Enables scalable inserts and complex queries for time-series data (Community Edition)
timescaledb_toolkit | 1.18.0 | public | Library of analytical hyperfunctions, time-series pipelining, and other SQL utilities
vector | 0.8.0 | public | vector data type and ivfflat and hnsw access methods
(7 rows)
Let’s also check if the OpenAI function calls are working. You can test this by:
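A sketch of the check, assuming the pgai 0.4.x API, where the functions live in the `ai` schema. Depending on your pgai version, you can pass the key through the api_key parameter as shown here or set the ai.openai_api_key session setting and omit it:

```sql
-- List the OpenAI models visible to your API key
SELECT id, created, owned_by
FROM ai.openai_list_models(api_key => '<your-openai-api-key>')
ORDER BY created DESC;
```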
This code should return the list of all OpenAI models available:
id | created | owned_by
-----------------------------+------------------------+-----------------
chatgpt-4o-latest | 2024-08-13 02:12:11+00 | system
gpt-4o-2024-08-06 | 2024-08-04 23:38:39+00 | system
gpt-4o-mini | 2024-07-16 23:32:21+00 | system
gpt-4o-mini-2024-07-18 | 2024-07-16 23:31:57+00 | system
gpt-4o-2024-05-13 | 2024-05-10 19:08:52+00 | system
gpt-4o | 2024-05-10 18:50:49+00 | system
gpt-4-turbo-2024-04-09 | 2024-04-08 18:41:17+00 | system
gpt-4-turbo | 2024-04-05 23:57:21+00 | system
gpt-3.5-turbo-0125 | 2024-01-23 22:19:18+00 | system
gpt-4-turbo-preview | 2024-01-23 19:22:57+00 | system
gpt-4-0125-preview | 2024-01-23 19:20:12+00 | system
…
We can now proceed with the tutorial steps.
Performing Data Classification in PostgreSQL Using Pgai
In this tutorial, we will start with a list of product reviews. We will then use the pgai and pgvector extensions to classify the reviews into positive, negative, or neutral categories using the OpenAI API. You can use a similar approach to perform any other data classification task.
Let’s first create a product_reviews table with some sample data.
Create the product_reviews table
The following SQL command creates a table named product_reviews to store customer reviews of a product. The table includes columns for customer ID, date of review, product name, a short review, and a detailed review.
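A minimal sketch of the table; the short_review and review column names are referenced later in the tutorial, while the exact types for the other columns are assumptions:

```sql
CREATE TABLE product_reviews (
    customer_id  INT,
    date         TIMESTAMPTZ,
    product      TEXT,
    short_review TEXT,
    review       TEXT
);
```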
Let's confirm that the product_reviews table has been created successfully by running the following command:
\dt
Output:
List of relations
Schema | Name | Type | Owner
--------+-----------------+-------+----------
public | product_reviews | table | postgres
(1 row)
Insert sample product reviews
Let’s insert sample reviews for products like laptops, phones, and tablets.
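A sketch of the insert. The detailed review texts match the classification results shown later in this tutorial; the short reviews and dates here are illustrative:

```sql
INSERT INTO product_reviews (customer_id, date, product, short_review, review) VALUES
(1,  '2024-10-01', 'laptop', 'Great laptop',         'Great laptop, very fast and reliable.'),
(2,  '2024-10-02', 'laptop', 'Good laptop',          'Good laptop, but the battery life could be better.'),
(3,  '2024-10-03', 'laptop', 'Worst laptop',         'This is the worst laptop I have ever used.'),
(4,  '2024-10-04', 'laptop', 'Small screen',         'Not bad, but the screen is a bit small.'),
(5,  '2024-10-05', 'phone',  'Excellent phone',      'Excellent phone, great camera and battery life.'),
(6,  '2024-10-06', 'phone',  'Decent phone',         'Decent phone, but the screen is not as good as I expected.'),
(7,  '2024-10-07', 'phone',  'Poor phone',           'Poor phone, battery life is terrible and camera quality is not good.'),
(8,  '2024-10-08', 'tablet', 'Awesome tablet',       'Awesome tablet, very fast and responsive.'),
(9,  '2024-10-09', 'tablet', 'Satisfactory tablet',  'Satisfactory tablet, but the screen resolution could be better.'),
(10, '2024-10-10', 'tablet', 'Disappointing tablet', 'Disappointing tablet, slow performance and battery drains quickly.');
```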
Create the product_reviews_classification table
Next, we'll create the product_reviews_classification table to store the data classification results, including customer ID and review type. You can use the SQL command below to create the table.
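A minimal sketch of the results table:

```sql
CREATE TABLE product_reviews_classification (
    customer_id INT,
    review_type TEXT
);
```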
Classify product reviews and insert them into the product_reviews_classification table
To classify the product reviews into positive, negative, or neutral categories, we will use the OpenAI API. We will use the openai_chat_complete SQL function provided by the pgai extension to perform the data classification task.
In the SQL command, we will perform three key steps.
Step 1: Format reviews with a structured template
In the first step, we’ll format the raw reviews into a structured text format. We can do this by using SQL's format function, which combines the short_review and review fields into a consistent template.
Step 2: Classify reviews using OpenAI and categorize the results
Next, we’ll take the formatted reviews and call OpenAI's API to classify each one as positive, negative, or neutral. If the classification is one of the three expected categories, we will keep it; otherwise, we will default the review to neutral.
Step 3: Insert classified reviews into the product_reviews_classification table
Finally, we’ll insert the classified review data into the product_reviews_classification table.
Here’s the three-step SQL command that performs the classification task.
In the openai_chat_complete function, we’ll use the ‘gpt-4o’ model.
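A sketch of the three-step command, assuming the pgai 0.4.x API, where openai_chat_complete lives in the ai schema and returns the raw OpenAI chat-completion JSON. The prompt wording and the label-extraction path are illustrative:

```sql
WITH formatted AS (
    -- Step 1: combine the short and detailed reviews into one template
    SELECT customer_id,
           format(E'Short review: %s\nDetailed review: %s',
                  short_review, review) AS combined_review
    FROM product_reviews
),
classified AS (
    -- Step 2: ask the model for a one-word classification
    SELECT customer_id,
           ai.openai_chat_complete(
               'gpt-4o',
               jsonb_build_array(
                   jsonb_build_object('role', 'system', 'content',
                       'Classify the following product review as positive, negative, or neutral. Respond with a single word.'),
                   jsonb_build_object('role', 'user', 'content', combined_review)
               )
           )->'choices'->0->'message'->>'content' AS raw_label
    FROM formatted
)
-- Step 3: keep valid labels, default anything else to neutral, and insert
INSERT INTO product_reviews_classification (customer_id, review_type)
SELECT customer_id,
       CASE WHEN lower(trim(raw_label)) IN ('positive', 'negative', 'neutral')
            THEN lower(trim(raw_label))
            ELSE 'neutral'
       END
FROM classified;
```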
First, let's confirm that the product reviews have been classified and inserted into the product_reviews_classification table by running the following command:
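For example:

```sql
SELECT customer_id, review_type
FROM product_reviews_classification;
```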
Output:
customer_id | review_type
-------------+-------------
4 | neutral
10 | negative
6 | neutral
2 | neutral
9 | neutral
3 | negative
5 | positive
7 | negative
1 | positive
8 | positive
(10 rows)
We can verify our results by performing a simple SQL join.
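One way to write the join:

```sql
-- Pair each review with its assigned category
SELECT pr.customer_id, pr.review, prc.review_type
FROM product_reviews pr
JOIN product_reviews_classification prc USING (customer_id);
```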
Output:
customer_id | review | review_type
-------------+----------------------------------------------------------------------+-------------
4 | Not bad, but the screen is a bit small. | neutral
10 | Disappointing tablet, slow performance and battery drains quickly. | negative
6 | Decent phone, but the screen is not as good as I expected. | neutral
2 | Good laptop, but the battery life could be better. | neutral
9 | Satisfactory tablet, but the screen resolution could be better. | neutral
3 | This is the worst laptop I have ever used. | negative
5 | Excellent phone, great camera and battery life. | positive
7 | Poor phone, battery life is terrible and camera quality is not good. | negative
1 | Great laptop, very fast and reliable. | positive
8 | Awesome tablet, very fast and responsive. | positive
(10 rows)
Great! We have successfully classified the product reviews by type using pgai's openai_chat_complete function.
Automating the Data Classification Task Using a Trigger
Next, we’ll create a trigger that automates the data classification task. To do this, we first need to encapsulate the SQL command for data classification into a PostgreSQL function, which will be called by the trigger.
Step 1: Encapsulate the data classification task in a function
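A sketch of the trigger function, mirroring the classification command from earlier; as before, the ai-schema call and prompt wording assume the pgai 0.4.x API:

```sql
CREATE OR REPLACE FUNCTION classify_and_insert_review()
RETURNS trigger AS $$
DECLARE
    raw_label text;
BEGIN
    -- Classify the newly inserted review with OpenAI via pgai
    SELECT ai.openai_chat_complete(
               'gpt-4o',
               jsonb_build_array(
                   jsonb_build_object('role', 'system', 'content',
                       'Classify the following product review as positive, negative, or neutral. Respond with a single word.'),
                   jsonb_build_object('role', 'user', 'content',
                       format(E'Short review: %s\nDetailed review: %s',
                              NEW.short_review, NEW.review))
               )
           )->'choices'->0->'message'->>'content'
    INTO raw_label;

    -- Keep only expected labels; default anything else to neutral
    INSERT INTO product_reviews_classification (customer_id, review_type)
    VALUES (
        NEW.customer_id,
        CASE WHEN lower(trim(raw_label)) IN ('positive', 'negative', 'neutral')
             THEN lower(trim(raw_label))
             ELSE 'neutral'
        END
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```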
This will create the function classify_and_insert_review().
Step 2: Create the trigger
Next, let’s create a trigger that will call the above function whenever a new row is inserted into the product_reviews table.
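For example (the trigger name here is an arbitrary choice):

```sql
CREATE TRIGGER product_reviews_classify
AFTER INSERT ON product_reviews
FOR EACH ROW
EXECUTE FUNCTION classify_and_insert_review();
```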
Once this trigger is created, every time a new row is inserted into the product_reviews table, the function classify_and_insert_review() will be executed.
Next Steps
In this tutorial, we performed a simple classification task demonstrating how to use automatic data classification in PostgreSQL using OpenAI and pgai. We used the pgai extension to interact with the OpenAI API and classify product reviews into positive, negative, or neutral categories. We then created a trigger to automate the classification.
To start automating the classification of your data using PostgreSQL and OpenAI, check out Timescale Cloud's AI stack. It includes a complete suite of open-source PostgreSQL extensions—pgvector, pgai, pgai Vectorizer, and pgvectorscale—that will streamline your AI workflows without the overhead of managing multiple systems. Build smarter, more efficient AI solutions today using a free PostgreSQL database on Timescale Cloud.
If you want to build locally, you will find installation instructions for pgai, pgai Vectorizer, and pgvectorscale in the respective repositories (GitHub ⭐s welcome!). Here’s an example tutorial using Llama 3 and Ollama to guide you along the way.