I’ve been using this since early this year and it’s been great. It was what convinced me to just stick to Postgres rather than using a dedicated vector db.
Only working with 100m or so vectors, but for that it does the job.
The biggest selling point of Postgres over qdrant or whatever is that you can put all the data in the same db: you get joins, CTEs, foreign keys and other constraints, lower latency, you effectively get rid of n+1 cases, and you can ensure data integrity.
I generally agree that one database instance is ideal, but there are other reasons why Postgres everywhere is advantageous, even across multiple instances:
- Expertise: it's just SQL for the most part
- Ecosystem: same ORM, same connection pooler
- Portability: all major clouds have managed Postgres
I'd gladly take multiple Postgres instances even if I lose cross-database joins.
Yep. If performance becomes a concern but we still want to exploit joins etc., it's easy to set up replicas and "shard" read-only use cases across them.
Depends on the query and I don’t have exact numbers off the top of my head, but we’re talking low-100ms range for something pgvector itself wasn’t able to handle in a reasonable amount of time.
Yes, RDS really seems to hold PG back on AWS, with all the interesting PG extensions getting released now (pg_lake). It is a shame I can't move to other PG vendors, because it is a pain in the ass to get all the privacy and legal docs in order.
Technically, is there a reason AWS can't allow sophisticated users to run arbitrary extensions in RDS? The control-plane/data-plane boundaries should be robust enough that an RDS extension couldn't "hack AWS". Worst case is that AWS would have to account for the possibility of a crash-backoff loop in RDS.
I understand that practically you can b0rk an install with a bunch of poorly configured extensions, and you can easily install something that hoovers up all your data and sends it to North Korea. But if I understand those risks and can mitigate them, why not allow RDS to load up extension binaries from an S3 bucket and call it a day?
If AWS wanted to broaden the available market, this would be an opportunity to leverage partners and the AWS marketplace mechanisms: Instead of AWS vouching for the extensions, allow partners to sell support in a marketplace. AWS has clean hands for the "My RDS instance crashed and wiped out my market cap" risk, but they can still wet their beak on the money flowing through to vendors. Meanwhile, vendors don't have to take full responsibility for the entire stack and mess with PrivateLink etc. Top tier vendors would also perform all the SOC attestation so that RDS doesn't lose out.
P.S. Andy, if you're reading this you should call me.
I'm considering hosting a separate pg db just to be able to access certain extensions. I am interested in this extension as well as https://wiki.postgresql.org/wiki/Incremental_View_Maintenanc... (also not available on RDS). Then use logical replication for specific data source tables (guess it would need to be DMS).
Worth noting that the filtering implementation is quite restrictive if you want to avoid post-filtering:

- filters must be expressible as discrete smallints (ruling out continuous variables like timestamps or high-cardinality filters like ids);
- filters must always be denormalized onto the table you're indexing (no filtering on attributes of parent documents, for example); and
- filters must be declared at index creation time (lots of time spent on expensive index builds if you want to add filters).

Personally I would consider these caveats pretty big deal-breakers if the intent is scale and you do a lot of filtering.
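Concretely, my understanding of the pgvectorscale label-filtering setup is roughly the following; table/column names and label values are illustrative, and the exact syntax may differ between versions:

```sql
-- Illustrative schema: filter labels must live on the indexed table itself,
-- as discrete smallints, and be part of the index from the start.
CREATE TABLE documents (
    id        bigint PRIMARY KEY,
    embedding vector(768),
    labels    smallint[]  -- e.g. 1 = 'published', 2 = 'internal', ...
);

-- Labels are baked into the diskann index at creation time; adding a new
-- filterable attribute later means rebuilding the index.
CREATE INDEX ON documents
    USING diskann (embedding vector_cosine_ops, labels);

-- The && (array overlap) predicate can then be evaluated inside the index
-- rather than as a post-filter on the result set.
SELECT id
FROM documents
WHERE labels && ARRAY[1]::smallint[]
ORDER BY embedding <=> $1
LIMIT 10;
```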
Combined with our other search extension for full text search these two extensions make postgres a really capable hybrid search engine: https://github.com/timescale/pg_textsearch
does this actually fix metadata filtering during vector search? that's the thing that kills performance in pgvector. weaviate had the same problem, ended up using qdrant instead
We have a lot of happy customers who moved from RDS to Tiger Cloud, if pgvectorscale looks interesting to you and you don't want to self-host PG.
But yes big cloud providers move slow in adopting extensions.
https://www.postgresql.org/docs/current/postgres-fdw.html
Essentially you combine the pgvector score and the bm25 score to hopefully get better results.
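A minimal sketch of that combination using reciprocal rank fusion (RRF). The text leg here uses Postgres's built-in ts_rank as a stand-in (the extension's BM25 score would slot into the same place), and the table/column names are illustrative:

```sql
-- Hybrid search via reciprocal rank fusion: rank the top-k hits from each
-- method separately, then score each document by its summed reciprocal
-- ranks. k = 60 is the conventional RRF damping constant.
WITH vector_hits AS (
    SELECT id, RANK() OVER (ORDER BY embedding <=> $1) AS r
    FROM documents
    ORDER BY embedding <=> $1
    LIMIT 50
),
text_hits AS (
    SELECT id, RANK() OVER (ORDER BY ts_rank(tsv, plainto_tsquery($2)) DESC) AS r
    FROM documents
    WHERE tsv @@ plainto_tsquery($2)
    ORDER BY r
    LIMIT 50
)
SELECT id,
       COALESCE(1.0 / (60 + v.r), 0)
     + COALESCE(1.0 / (60 + t.r), 0) AS rrf_score
FROM vector_hits v
FULL OUTER JOIN text_hits t USING (id)
ORDER BY rrf_score DESC
LIMIT 10;
```

A document that ranks well on both legs beats one that ranks very well on only one, which is usually the behavior you want from hybrid search.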
https://www.tigerdata.com/blog/pgvector-vs-pinecone
https://github.com/timescale/pgvectorscale/issues/113