In-memory processing has started to become the norm in large-scale data handling. This is a close-to-the-metal analysis of highly important but often neglected aspects of memory access times and how they impact big data and NoSQL technologies.
We cover aspects such as the TLB, Transparent Huge Pages, the QPI link, Hyper-Threading and the impact of virtualization on high-memory-footprint applications. We present benchmarks of various technologies ranging from Cloudera’s Impala to Couchbase and show how they are affected by the underlying hardware.
The key takeaway from the presentation below is a better understanding of how to size a cluster, how to choose a cloud provider and an instance type for big data and NoSQL workloads, and why not every core or GB of RAM is created equal.
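The cost gap the abstract alludes to is easy to reproduce. Below is a minimal, illustrative micro-benchmark (not taken from the presentation): it walks the same buffer once sequentially and once in a shuffled order, so the difference in timings comes largely from cache and TLB misses rather than from the amount of work done.

```python
# Illustrative sketch: sequential vs. random access over the same data.
# The shuffled walk touches the same elements but defeats hardware
# prefetching and stresses the TLB, which is typically measurably slower.
import random
import time

N = 1 << 22  # ~4M elements, large enough to spill out of typical caches
data = list(range(N))

seq_order = list(range(N))
rand_order = seq_order[:]
random.shuffle(rand_order)

def walk(order):
    """Sum the buffer in the given visit order; return (seconds, checksum)."""
    total = 0
    start = time.perf_counter()
    for i in order:
        total += data[i]
    return time.perf_counter() - start, total

t_seq, check_seq = walk(seq_order)
t_rand, check_rand = walk(rand_order)
assert check_seq == check_rand  # same work, different access pattern
print(f"sequential: {t_seq:.3f}s  random: {t_rand:.3f}s")
```

On most machines the random walk is noticeably slower, and the gap widens on NUMA systems or inside virtual machines where TLB misses are more expensive, which is exactly the class of effect the benchmarks in the talk quantify.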
The rise of virtualization has been a boon for enterprises large and small, but that doesn’t mean the transition from more predictable storage methods has been easy.
“Virtualization is changing the IT infrastructure landscape and creating huge problems for customers using traditional storage,” says Doug Rich, EMEA Vice President of Tintri, a California-based IT company that’s been at the forefront of the race to virtualization. “Organizations come to Tintri looking to tackle performance problems and dramatically reduce their storage management time.”
Here, Doug discusses Tintri, the challenges facing IT, Big Data and the trends we should all be following. Read on.