Editorials

Can You Pick and Choose Database Platform by Discrete Function?

I’ve talked about, written about, implemented, and presented on the fact that with so many platforms, options, and implementations “out there,” it’s more complex than ever to select the platform for a given application. Sure, you can standardize, or you can pick generally the same platform and then allow for exceptions. But even within platforms, you have choices: software or platforms as a service, on-premises, cloud, all of that.

Even if you standardize on, say, SQL Server, the options are still before you. Hosting, managed services, cloud, hybrid solutions, all of that still comes into play. You might dismiss some options up front, but as departments and projects deploy, it’s quite possible that the environment will start to see these different incarnations, even if you hold an iron grip on platform selection.

But is there another consideration?

Are there cases where choosing a platform for an application or project just isn’t enough? Are there cases where you’ll face specific *functionality* requirements, and different platforms offer better or worse options for supporting them? Specifically, are there cases where a single solution might use database functionality from one platform, then another, and yet another – and pull it all together for the end product?

I bring this up because there are some clear functional points where one platform outperforms others. Simple illustrations include text processing and big-data parsing on platforms like Hadoop. These types of mixed solutions, where you process some information on that other platform and then consolidate or use the results in your SQL Server environment, are already happening.
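To make that consolidation pattern concrete, here is a minimal sketch. It uses Python’s built-in sqlite3 as a stand-in for SQL Server, and the “external results” are illustrative made-up rows standing in for the output of some other platform’s job (say, a word-count run on Hadoop); the point is the shape of the workflow – bulk-load the foreign platform’s output into your relational store, then query it there alongside everything else.

```python
import sqlite3

# Illustrative output from an external text-processing platform
# (e.g. a Hadoop word-count job). In practice you'd read this from
# an export file or HDFS; these rows are made up for the sketch.
external_results = [("database", 42), ("platform", 17), ("hadoop", 9)]

# Consolidate into the relational store. sqlite3 stands in for
# SQL Server here; the pattern is the same either way.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE word_counts (word TEXT PRIMARY KEY, n INTEGER)")
conn.executemany("INSERT INTO word_counts VALUES (?, ?)", external_results)

# Downstream reporting now happens in one place, in plain SQL.
top = conn.execute(
    "SELECT word, n FROM word_counts ORDER BY n DESC LIMIT 1"
).fetchone()
print(top)  # ('database', 42)
```

The heavy lifting (parsing, counting) happens on the platform suited to it; the relational side only sees small, structured results it can join and report on.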

How granular does it make sense to go? It seems to me that if you go too deep into the “this function here, that function there…” type of solution, maintenance can be a nightmare. Not only will code and language elements differ, but the location of the data, security, and even just the core task of keeping the tools updated all become exponentially larger issues to work through.

To play devil’s advocate, though, I can see cases where these types of split environments make sense. We have it in our own systems: not necessarily at the function level, but different processing is done in different ways, and our own core systems are split across three different providers. It wasn’t fun to set up. But our maintenance and functionality have dramatically improved because we have rock-solid support at each of those providers.

When is too much granularity just… too much? Where do you draw the line – and perhaps more importantly, are we actually ABLE to draw the line, or is ours (the job of data platform folks) the job of making it all work as requested and required by our stakeholders?