Big data without high power computing just doesn’t make sense

There is more data in the world than ever before, and it is growing faster than ever: 90% of all the data in existence has been generated in the last two years. In business, organisations now have to deal with social media data, email, mobile and much more besides.

Making sense of this can be a challenge. Yet some organisations choose to try to do without the required computing power – this is misguided at best and foolhardy at worst.

Tools for big data
When you are dealing with data at this volume and attempting to analyse it for business insight and competitive advantage, it stands to reason that you are going to need the right tools for the job.

Earlier this week, Facebook analytics chief Ken Rudin spoke about how companies shouldn’t get sucked into thinking that Hadoop is the only tool required for big data. He said ‘in reality, big data should include Hadoop and relational [databases] and any other technology that is suitable for the task at hand’.

He is right.
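To make that "right tool for the job" point concrete, here is a rough Python sketch: a small point lookup goes to a relational database, while a full-table aggregation over billions of rows is handed to Hadoop via Hive. The hostnames, credentials and table names are hypothetical placeholders, not a real deployment.

```python
# A minimal sketch of mixing tools: relational database for selective
# lookups, Hadoop (via Hive) for large batch aggregations.
# All connection details and table names below are hypothetical.
import psycopg2           # PostgreSQL client (pip install psycopg2-binary)
from pyhive import hive   # Hive client for Hadoop (pip install pyhive)

def lookup_customer(customer_id):
    """Point lookup: a relational database answers this in milliseconds."""
    conn = psycopg2.connect(host="postgres.example.com", dbname="crm",
                            user="analyst", password="***")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT name, email FROM customers WHERE id = %s",
                    (customer_id,))
        return cur.fetchone()

def daily_event_counts():
    """Full-table aggregation over billions of rows: hand it to Hadoop/Hive."""
    conn = hive.Connection(host="hive.example.com", port=10000,
                           username="analyst")
    cur = conn.cursor()
    cur.execute("SELECT event_date, COUNT(*) FROM clickstream "
                "GROUP BY event_date")
    return cur.fetchall()
```

Neither store replaces the other; each query simply goes to the engine that handles it best.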

A big data infrastructure
One of the key elements required to process big data efficiently is raw computing power. In a virtualised environment, a significant share of that power is lost to the hypervisor. This is why our infrastructure comes without a hypervisor, allowing users to benefit from the full performance and power of bare metal.

Many industry benchmarks support this. Nati Shalom, the CTO and founder of GigaSpaces, has stated that ‘big data on a virtualised infrastructure would require 3x more resources than its Baremetal equivalent’.

That’s a huge differential, and it means we can deliver up to 80% more performance per resource than any virtualised cloud.
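If you want to sanity-check that kind of claim on your own workloads, a crude micro-benchmark is enough to get a feel for the difference: run the same CPU-bound job on a virtualised instance and on a bare-metal machine, then compare the wall-clock times. The sketch below is illustrative only, not a rigorous benchmark.

```python
# A rough, self-contained micro-benchmark: time the same CPU-bound job
# on each environment and compare the results. Illustrative only.
import time
import hashlib

def cpu_bound_job(iterations=2_000_000):
    """Repeated hashing as a stand-in for a compute-heavy big data task."""
    digest = b"seed"
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

if __name__ == "__main__":
    start = time.perf_counter()
    cpu_bound_job()
    elapsed = time.perf_counter() - start
    # Run this once on the virtualised host and once on bare metal;
    # the ratio of the two timings is a first approximation of the overhead.
    print(f"CPU-bound job took {elapsed:.2f}s on this host")
```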

Pay-per-hour big data crunching
Because we offer our infrastructure on a pay-per-hour basis, customers can also benefit from truly cost-effective big data processing. We understand that many big data workloads do not require such levels of computing power all of the time, so we have no desire to see customers locked into lengthy contracts.
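As a back-of-the-envelope illustration (the prices and usage pattern below are hypothetical examples, not our actual rates), pay-per-hour wins whenever your crunching is bursty rather than constant:

```python
# Hypothetical comparison of pay-per-hour usage against a flat monthly
# contract for a bursty batch workload. All figures are illustrative.
HOURLY_RATE = 1.50          # hypothetical cost per server-hour
MONTHLY_FLAT = 700.00       # hypothetical fixed monthly contract price
HOURS_USED = 8 * 22         # e.g. an 8-hour nightly batch job, 22 days a month

pay_per_hour_cost = HOURLY_RATE * HOURS_USED
print(f"Pay-per-hour: {pay_per_hour_cost:.2f} vs flat contract: {MONTHLY_FLAT:.2f}")
# A workload that only runs part of the day pays only for the hours it uses.
```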

Such flexibility combined with bare metal power is a compelling prospect for businesses that are serious about big data. Attempting to use Hadoop (or indeed any other similar tool for big data) on a virtualised environment is akin to driving a luxury car in second gear. You won’t get the desired performance, and whilst you may get there in the end, it will take you much longer than it should.

It’s a simple truth – to get the most from big data applications, any organisation will need high-power computing.


