7 comments

  • ricw 1 day ago
    I’ve been using this since early this year and it’s been great. It was what convinced me to just stick to Postgres rather than using a dedicated vector db.

    Only working with 100m or so vectors, but for that it does the job.

    • pqdbr 1 day ago
      Are you using a dedicated pg instance for vectors, or do you keep all your data in a single pg instance (vector and non-vector)?
      • ComputerGuru 1 day ago
        The biggest selling point of using Postgres over qdrant or whatever is that you can put all the data in the same db and use joins and CTEs, foreign keys and other constraints, get lower latency, effectively eliminate n+1 cases, and ensure data integrity.
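
        To make that concrete, a minimal sketch against a hypothetical schema (documents with a pgvector embedding column, authors behind a foreign key), where one query does both the similarity search and the join:

          -- nearest neighbors plus relational data in a single query; the FK
          -- on author_id enforces integrity, and the join replaces what would
          -- be n+1 lookups against a separate vector db
          SELECT d.id, d.title, a.name
          FROM documents d
          JOIN authors a ON a.id = d.author_id
          ORDER BY d.embedding <=> '[0.1, 0.2, 0.3]'::vector  -- cosine distance
          LIMIT 10;
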
        • dalberto 1 day ago
          I generally agree that one database instance is ideal, but there are other reasons why Postgres everywhere is advantageous, even across multiple instances:

          - Expertise: it's just SQL for the most part
          - Ecosystem: same ORM, same connection pooler
          - Portability: all major clouds have managed Postgres

          I'd gladly take multiple Postgres instances even if I lose cross-database joins.

      • ricw 1 day ago
        All in one of course. That’s the biggest advantage. And why postgres is great - it covers virtually all standard use cases.
    • esafak 1 day ago
      What kind of performance do you observe with what setup?
      • ricw 1 day ago
        Depends on the query and I don’t have exact numbers off the top of my head, but we’re talking the low-100ms range for something pgvector itself wasn’t able to handle in a reasonable amount of time.
  • aunty_helen 1 day ago
    Related discussion for pgvector perf: https://news.ycombinator.com/item?id=45798479
    • tacoooooooo 1 day ago
      the main issue with pgvectorscale is that it's not available in RDS :(
      • omg2864 1 day ago
        Yes, RDS really seems to hold PG back on AWS, with all the interesting pg extensions getting released now (pg_lake). It's a shame I can't move to other PG vendors because it is a pain in the ass to get all the privacy and legal docs in order.
        • coredog64 17 hours ago
          Technically, is there a reason AWS can't allow sophisticated users to run arbitrary extensions in RDS? The control-plane/data-plane boundaries should be robust enough that an RDS extension isn't going to be able to "hack AWS". Worst case, AWS would have to account for the possibility of a crash backoff loop in RDS.

          I understand that practically you can b0rk an install with a bunch of poorly configured extensions, and you can easily install something that hoovers up all your data and sends it to North Korea. But if I understand those risks and can mitigate them, why not allow RDS to load up extension binaries from an S3 bucket and call it a day?

          If AWS wanted to broaden the available market, this would be an opportunity to leverage partners and the AWS marketplace mechanisms: Instead of AWS vouching for the extensions, allow partners to sell support in a marketplace. AWS has clean hands for the "My RDS instance crashed and wiped out my market cap" risk, but they can still wet their beak on the money flowing through to vendors. Meanwhile, vendors don't have to take full responsibility for the entire stack and mess with PrivateLink etc. Top tier vendors would also perform all the SOC attestation so that RDS doesn't lose out.

          P.S. Andy, if you're reading this you should call me.

        • calderwoodra 1 day ago
          Yes, the InfoSec advantages of using RDS are very real, especially in B2B Enterprise SaaS.
      • mrinterweb 1 day ago
        I'm considering hosting a separate pg db just to be able to access certain extensions. I'm interested in this extension as well as https://wiki.postgresql.org/wiki/Incremental_View_Maintenanc... (also not available on RDS). Then I'd use logical replication for the specific source tables (I guess it would need to be DMS).
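
        For the replication piece, a rough sketch with made-up names (plain logical replication shown; from RDS as a source it may indeed have to be DMS instead):

          -- on the source instance: publish only the tables you need
          CREATE PUBLICATION vector_pub FOR TABLE documents, chunks;

          -- on the extension-friendly instance: subscribe to them
          CREATE SUBSCRIPTION vector_sub
              CONNECTION 'host=source-db dbname=app user=replicator'
              PUBLICATION vector_pub;
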
  • whakim 1 day ago
    Worth noting that the filtering implementation is quite restrictive if you want to avoid post-filtering (sketch below):

    - Filters must be expressible as discrete smallints (ruling out continuous variables like timestamps or high-cardinality filters like ids).
    - Filters must always be denormalized onto the table you're indexing (no filtering on attributes of parent documents, for example).
    - Filters must be declared at index creation time (lots of time spent on expensive index builds if you want to add filters).

    Personally I would consider these caveats pretty big deal-breakers if the intent is scale and you do a lot of filtering.
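
    For reference, the label-based pattern looks roughly like this (my reading of the pgvectorscale docs; the table and label values are made up):

      -- labels are smallint[] and get baked into the index itself
      CREATE TABLE documents (
          id        BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
          embedding VECTOR(3),      -- 3 dims just for the sketch
          labels    SMALLINT[]     -- e.g. 1 = 'news', 2 = 'legal'
      );

      CREATE INDEX ON documents
          USING diskann (embedding vector_cosine_ops, labels);

      -- filtered ANN search: && (array overlap) is what the index accelerates
      SELECT id
      FROM documents
      WHERE labels && ARRAY[1, 2]::smallint[]
      ORDER BY embedding <=> '[0.1, 0.2, 0.3]'::vector
      LIMIT 10;
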
  • jascha_eng 1 day ago
    Combined with our other extension for full-text search, these two make Postgres a really capable hybrid search engine: https://github.com/timescale/pg_textsearch
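
    As a generic sketch of the hybrid pattern (this uses Postgres's built-in full-text search rather than pg_textsearch's own API, over a hypothetical docs table with tsv and embedding columns):

      -- rank the top 50 from each retriever, then fuse with RRF (k = 60)
      WITH vec AS (
          SELECT id,
                 ROW_NUMBER() OVER (
                     ORDER BY embedding <=> '[0.1, 0.2, 0.3]'::vector) AS rnk
          FROM docs
          ORDER BY rnk
          LIMIT 50
      ), fts AS (
          SELECT id,
                 ROW_NUMBER() OVER (
                     ORDER BY ts_rank(tsv, plainto_tsquery('postgres search')) DESC) AS rnk
          FROM docs
          WHERE tsv @@ plainto_tsquery('postgres search')
          ORDER BY rnk
          LIMIT 50
      )
      SELECT id,
             COALESCE(1.0 / (60 + v.rnk), 0)
           + COALESCE(1.0 / (60 + f.rnk), 0) AS rrf_score
      FROM vec v
      FULL JOIN fts f USING (id)
      ORDER BY rrf_score DESC
      LIMIT 10;
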
  • isoprophlex 1 day ago
    The linked blogpost is an interesting read, too, comparing well-tuned pgvector to pinecone:

    https://www.tigerdata.com/blog/pgvector-vs-pinecone

  • dmarwicke 1 day ago
    Does this actually fix metadata filtering during vector search? That's the thing that kills performance in pgvector. Weaviate had the same problem; ended up using qdrant instead.
  • mmmeff 1 day ago
    This is still unsupported in RDS, right?