We recently came across an impressive TechTarget article by Nick Martin, a leading tech writer in the GPU database space. In it, Nick discussed how GPU-accelerated computing is making its way into enterprise data centers, fueling machine learning and artificial intelligence initiatives.
He was spot on in identifying the biggest problem with conventional CPU-based systems: some queries can take an entire night to run. At best, you can ask only one question a day; at worst, you wake up in the morning to discover there was an error in the query, and you must start the analysis all over again. That is exactly the problem a GPU database can solve.
So why aren’t most companies jumping straight to a GPU-accelerated database?
As Nick put it: “With systems as large, complex and expensive as enterprise database platforms, there’s an understandable hesitancy to rip and replace.”
In other words, companies with large existing database infrastructure are often unwilling to rip it out and replace it.
Most companies have poured considerable blood, sweat and tears into bringing their current tech investments up to date. It’s understandable that they don’t want to spend more money, more engineering effort and more years moving to a new database when new technology is released every year.
Brytlyt was built with that problem in mind: our GPU-accelerated database augments and accelerates these existing investments and improves ROI without a technology overhaul. We can do this because we chose to build our product on PostgreSQL. As a result, with a single line of code, any company can run its databases on a GPU database and never deal with overnight queries again.
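To illustrate what PostgreSQL compatibility can mean in practice, here is a minimal sketch. Because a PostgreSQL-based database speaks the standard PostgreSQL wire protocol and SQL dialect, an existing application’s only change may be its connection string. The host names and credentials below are hypothetical placeholders, not real Brytlyt endpoints, and the helper function is purely illustrative:

```python
# Hypothetical illustration: switching a PostgreSQL-compatible app
# to a GPU-backed endpoint by changing only the connection string.
# All host names, users and database names here are made up.

OLD_DSN = "postgresql://app_user@cpu-postgres.internal:5432/analytics"
NEW_DSN = "postgresql://app_user@gpu-brytlyt.internal:5432/analytics"


def swap_host(dsn: str, new_host: str) -> str:
    """Replace the host portion of a simple postgresql:// DSN.

    Assumes the DSN has the form scheme://user@host:port/db.
    """
    scheme, rest = dsn.split("://", 1)
    creds, host_part = rest.split("@", 1)
    _old_host, port_and_db = host_part.split(":", 1)
    return f"{scheme}://{creds}@{new_host}:{port_and_db}"


# The "one line" that changes when pointing at the GPU database:
dsn = swap_host(OLD_DSN, "gpu-brytlyt.internal")
print(dsn)
```

Everything downstream of that line — psycopg2, JDBC drivers, ORMs, BI tools — can keep working unmodified, because the protocol and SQL they speak is still PostgreSQL.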