Hadoop server performance tuning

Tuning a Hadoop cluster from a DevOps perspective requires an understanding of kernel and Linux principles. This article describes the most important parameters, along with tricks for optimal tuning.

Memory

Typically, modern Linux systems (kernel 2.6+) swap pages out to avoid out-of-memory (OOM) conditions and protect the system from kernel freezes. Hadoop, however, runs on Java, and each service (HDFS, HBase, ZooKeeper, etc.) is typically configured with a maximum heap size (-Xmx). The sum of these heaps must fit into the memory available on the system. A common formula for MapReduce v1:

TOTAL_MEMORY = (Mappers + Reducers) * CHILD_TASK_HEAP + TT_HEAP + DN_HEAP + RS_HEAP + OTHER_SERVICES_HEAP + 3GB (for OS and caches)
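For example, a hypothetical worker node with 10 map and 4 reduce slots, 1GB child task heaps, and 1GB each for the TaskTracker, DataNode, RegionServer, and other services would need:

TOTAL_MEMORY = (10 + 4) * 1GB + 1GB + 1GB + 1GB + 1GB + 3GB = 21GB

so the node should have at least 21GB of physical RAM.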

With MapReduce v2, YARN takes care of resource management, but only for services that run as YARN applications. [1], [2]

Minimize swapping by setting vm.swappiness to 0 at runtime:
echo 0 > /proc/sys/vm/swappiness

and persist it across reboots via /etc/sysctl.conf:
echo "vm.swappiness = 0" >> /etc/sysctl.conf
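To apply the entry from /etc/sysctl.conf immediately and verify the running value:

sysctl -p
sysctl vm.swappiness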

In addition, newer kernels implement THP (transparent huge pages), merged upstream in 2.6.38 and backported by Red Hat to the RHEL 6 kernel. Although THP is meant to reduce TLB pressure, its background defragmentation is known to hurt Hadoop workloads, with slowdowns of up to 30% reported. It's highly recommended to disable THP (on RHEL 6 kernels the sysfs paths carry a redhat_ prefix, as in the boot script below):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag


To do this automatically at boot time, I used /etc/rc.local:
if test -f /sys/kernel/mm/redhat_transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/redhat_transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
fi
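To verify that THP is really disabled, read the file back; the active setting is shown in brackets, e.g.:

cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]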

Another nice tuning trick is vm.overcommit_memory, which allows the kernel to overcommit virtual memory. Much of a process's virtual memory consists of sparse, zero-filled pages; Java, for example, reserves its entire heap up front but touches the pages only gradually. Since most of these pages never hold data, the kernel can safely hand out (overcommit) more virtual memory than is physically available. With this switch set, the operating system assumes there is always enough memory to back the virtual pages.

This feature can be configured at runtime via:

sysctl -w vm.overcommit_memory=1
sysctl -w vm.overcommit_ratio=50


and made permanent via /etc/sysctl.conf.
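Note that vm.overcommit_ratio is only consulted in mode 2 (strict accounting); with vm.overcommit_memory=1 the kernel always overcommits regardless of the ratio. The persistent entries in /etc/sysctl.conf would look like this:

vm.overcommit_memory = 1
vm.overcommit_ratio = 50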

Network / Sockets

On heavily used, large Linux-based clusters, the default socket and network configuration can slow down some operations. This section covers some of the possibilities I have found over the years. Be aware of how your cluster is used before applying them, as these settings affect all network communication.

First of all, widen the local (ephemeral) port range so the maximum number of sockets is available:

sysctl -w net.ipv4.ip_local_port_range="1024 65535"

In addition, faster recycling of sockets in the TIME_WAIT state avoids large TIME_WAIT queues, and reusing sockets for new connections can speed up network communication. Use these with care: the effect depends heavily on the network stack and the jobs running in your cluster, and performance can increase but also decrease dramatically, since connections in the WAIT state are now recycled quickly. Note that tcp_tw_recycle is known to break clients behind NAT and was removed entirely in Linux 4.12. These settings are typically used in high-ingest clusters running HBase, Storm, Kafka, and the like.

sysctl -w net.ipv4.tcp_tw_recycle=1
sysctl -w net.ipv4.tcp_tw_reuse=1

For the same reason, network buffers can become backlogged; new connections may then be dropped, causing performance problems. Raising the maximum socket buffers to 16MB, along with the number of outstanding SYN requests and backlog sockets, is usually sufficient:

sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_max_syn_backlog=4096
sysctl -w net.ipv4.tcp_syncookies=1
sysctl -w net.core.somaxconn=1024

=> Remember, these are not general-purpose tuning tricks. On general-purpose clusters, playing around with the network stack is not safe at all.
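To make the chosen settings survive a reboot, they can also go into /etc/sysctl.conf. A sketch, assuming you accepted the TIME_WAIT caveats above:

net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_syncookies = 1
net.core.somaxconn = 1024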

Disk / Filesystem / File descriptors

Linux tracks the access time (atime) of every file, which means a lot of extra disk seeks. But HDFS writes once and reads many times, and the NameNode tracks access times itself, so Hadoop doesn't need atime at the OS level. It's safe to disable it per disk via the mount options:

/dev/sdc /data01 ext3 defaults,noatime 0 0
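If the disk is already mounted, the option can be applied on the fly (assuming /data01 is the mount point from the fstab line above):

mount -o remount,noatime /data01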

Eliminate root-reserved space on data partitions. By default, ext3/ext4 reserve 5% of each filesystem for root, which leaves a lot of unusable space on large Hadoop disks. Disable the reserved space at format time:

mkfs.ext3 -m 0 /dev/sdc

If the filesystem already exists, the same can be done permanently with:

tune2fs -m 0 /dev/sdc
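A quick check that the change took effect; the reserved block count should now be 0:

tune2fs -l /dev/sdc | grep -i 'reserved block count'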

An optimal server has one HDFS mount point per disk and one or two dedicated disks for logs and the operating system.
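As a sketch, the corresponding /etc/fstab for a node with three (hypothetical) data disks could look like this:

/dev/sdb /data01 ext3 defaults,noatime 0 0
/dev/sdc /data02 ext3 defaults,noatime 0 0
/dev/sdd /data03 ext3 defaults,noatime 0 0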

File handles and processes

Typically, a Linux system ships with very conservative file-descriptor limits. Most of the time these are sufficient for small application servers, but not for Hadoop. If the limits are too low, Hadoop will throw a java.io.FileNotFoundException: Too many open files. To avoid this, raise the limits:

echo hdfs - nofile 32768 >> /etc/security/limits.conf
echo mapred - nofile 32768 >> /etc/security/limits.conf
echo hbase - nofile 32768 >> /etc/security/limits.conf

In addition, raise the maximum number of processes:

echo hdfs - nproc 32768 >> /etc/security/limits.conf
echo mapred - nproc 32768 >> /etc/security/limits.conf
echo hbase - nproc 32768 >> /etc/security/limits.conf
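The new limits apply at the next login (PAM session). To verify them for a service user, assuming the hdfs account has no login shell:

su -s /bin/bash -c 'ulimit -n -u' hdfs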

DNS / Name resolution

Communication in the Hadoop ecosystem is highly dependent on proper DNS resolution. Typically, name resolution is configured via /etc/hosts. It is important that the canonical name (the first hostname after the IP address) is the FQDN of the server, as in this example:

1.1.1.1 one.one.org one namenode
1.1.1.2 two.one.org two datanode

If DNS is used, the system hostname must match the FQDN in both forward and reverse name resolution. To reduce DNS lookup latency, use the name service caching daemon (nscd), but do not cache passwd, group, or netgroup information.
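To check that forward and reverse resolution agree for the local host:

hostname -f
getent hosts "$(hostname -f)"

And a minimal /etc/nscd.conf sketch that caches hosts lookups but not passwd or group:

enable-cache hosts yes
enable-cache passwd no
enable-cache group no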

There are also many specific tuning tricks within the Hadoop ecosystem that will be discussed in one of the following articles.
