3 essentials for establishing robust data infrastructure

16 October 2022


We recently talked about the challenges energy companies face in establishing strong data infrastructure and how the sector is dominated by legacy systems. 

It’s easy to see why. 

In an industry that builds dams and nuclear stations to last, largely unchanged (and therefore usable), for 50–100 years, it is easy to see how a mentality of sticking with the old became established. But data infrastructure is relatively ephemeral: it needs to change constantly to remain relevant (and therefore usable).

While it is not unheard of for a legacy system to languish for some twenty-odd years, at some point it will need to be replaced. In this era of data, cloud, and digital technology, maintaining robust data systems through perpetual change is a must-have, so today’s musings are about what to consider when setting up your data infrastructure.

1. It’s a lot more flexible in the Cloud

If one thing is certain, it’s that the old approach (inevitable at the time) of being locked into individual vendors with proprietary technology is no longer viable. It is now reasonable to expect that whenever a new tool is implemented, the infrastructure around it, and even the way the data is consumed, will be forward-compatible and interoperable.

The modularity of modern technology means you can reasonably tailor an end-to-end solution by separately purchasing the components that best suit your requirements. While this may be a more complex task upfront, there is merit in being able to configure a solution using different vendors. If that is too onerous, Cloud providers (AWS, Google Cloud, Microsoft Azure) offer what we call ‘walled gardens’ in tech, which provide everything you need to get started. While this does favour a single vendor over an ecosystem, you are not exactly locked in as these tools are largely interchangeable. 

Here’s an example. 

An energy company builds (or has built) a machine-learning tool inside one of these walled gardens. Instead of using vendor-specific components, the code is kept generic. This means that if the company decides to change Cloud providers, it can transfer the code into the new provider’s environment with relatively little effort. Keep your code and tools generic and porting to a new provider is entirely possible; after all, the cloud is just someone else’s computer!
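To make that concrete, here is a minimal sketch of what ‘generic’ code can look like. It is not tied to any provider’s SDK: it uses only pandas and scikit-learn, and it reads its storage locations from environment variables (the variable names, file paths, and column names here are purely illustrative), so the same script could run on AWS, Google Cloud, or Azure with only configuration changes.

```python
# Vendor-neutral training script: no provider-specific SDK imports, so the
# same file runs unchanged in any cloud environment. Paths are injected via
# environment variables (illustrative names, not real product settings).
import os

import pandas as pd
from joblib import dump
from sklearn.ensemble import GradientBoostingRegressor

# Each provider mounts or syncs data to these locations; the code itself
# never needs to know which cloud it is running in.
DATA_PATH = os.environ.get("DATA_PATH", "data/generation.csv")
MODEL_PATH = os.environ.get("MODEL_PATH", "models/demand_model.joblib")


def main() -> None:
    # Assumes a CSV with numeric feature columns and a 'demand_mw' target.
    df = pd.read_csv(DATA_PATH)
    features = df.drop(columns=["demand_mw"])
    target = df["demand_mw"]

    model = GradientBoostingRegressor()
    model.fit(features, target)

    # Persist the trained model to whatever storage the provider exposes.
    dump(model, MODEL_PATH)


if __name__ == "__main__":
    main()
```

Swapping providers then becomes a matter of pointing DATA_PATH and MODEL_PATH at the new environment’s storage, not rewriting the model.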

2. Make sure you have solid data infrastructure

There is a catch in all this. It doesn’t matter how good a vendor is, or how advanced their tech is, you won’t find an off-the-shelf solution that can work with insufficient or non-existent data infrastructure. 

We’ve seen first-hand, and heard accounts of, situations where solutions were acquired, consultants engaged, and millions of dollars spent, only to find down the track that there wasn’t millions of dollars’ worth of value realised, all because the data infrastructure was substandard. You can’t expect a sophisticated tool to perform and add value if it’s sitting on top of a swamp. The old Monty Python line “I built a castle… It sank into the swamp, so I built a second one” is no basis for a system of data governance.

3. Let the specialists do specialised work

Shifting to a modern data practice, one underpinned by robust data infrastructure, is a long journey. It needs commitment across an organisation to move forward in this way, and to bring in the right people, processes, and technology. And while each of these three things has a critical role in digital transformation, we would argue that having the right people in the right roles is paramount.

Ten or so years ago, when digital transformation was still a new buzzword, it was hard to find specialists with a great deal of experience in this area; those skills were still being developed in the workforce. But data has become a much more mature business endeavour and is beginning to specialise into a few disciplines, e.g., data science (which is a fairly generic term, but that’s a whole other conversation), data engineering (which is more specific), and machine learning engineering (which is becoming well defined within software and data).

We may be biased as practitioners and strategists, but our advice would always be to bring in the people who know how to do the things you don’t know how to do, and give them plenty of runway. And in the interest of being a cliché by including a Steve Jobs quote:

“It doesn’t make sense to hire smart people and then tell them what to do. We hire smart people so they can tell us what to do.” 

To put this into the context of energy companies, let’s say an energy provider that has only ever run gas power plants wants to move into renewables and build a wind farm. Obviously, a range of new skills will be required to make this happen, for example, specialists to advise on location and configuration. Things can go awry when these specialists are not given the freedom to undertake the project in the way they need to, because their activities may be foreign to the company hiring them. Understandably, if you’re running a successful company and someone suddenly comes in suggesting something completely different, things can get incredibly uncomfortable.

The great thing about hiring specialists (and letting them do their thing) is that by exposing your company to these skills and sharing the knowledge, you’ll upskill your internal team much faster and develop expertise that can be called upon in-house. Isn’t that a great win for everyone? 

If your business is operating legacy systems and you need some advice on how to modernise your infrastructure to be fit for purpose and keep pace with the industry, get in touch with the team at Flux.

 
