4/29/2023

Disk health checker

Core nodes usually go UNHEALTHY and the cluster terminates because termination protection is disabled and all core nodes exceed the disk storage capacity specified by a maximum utilization threshold in the yarn-site configuration classification, which corresponds to the yarn-site.xml file.

When disk utilization for a core node exceeds the utilization threshold, the YARN NodeManager health service reports the node as UNHEALTHY. While it's in this state, Amazon EMR deny lists the node and does not allocate YARN containers to it. If the node remains unhealthy for 45 minutes, Amazon EMR marks the associated Amazon EC2 instance for termination as FAILED_BY_MASTER. When all Amazon EC2 instances associated with core nodes are marked for termination, the cluster terminates with the status NO_SLAVE_LEFT because there are no resources left to execute jobs.

Exceeding disk utilization on one core node can lead to a chain reaction. If a single node exceeds the disk utilization threshold because of HDFS, other nodes are likely to be near the threshold as well. The first node exceeds the threshold, so Amazon EMR deny lists it. This increases the disk utilization burden on the remaining nodes, because they begin to replicate among themselves the HDFS data that was lost on the deny-listed node. Each node subsequently goes UNHEALTHY in the same way, and the cluster eventually terminates.

Best practices and recommendations

Configure cluster hardware with adequate storage
When you create a cluster, make sure that there are enough core nodes and that each has an adequate instance store and EBS storage volumes for HDFS. For more information, see Calculating the required HDFS capacity. You can also add core instances to existing instance groups manually or by using auto-scaling. The new instances have the same storage configuration as the other instances in the instance group. For more information, see Scaling cluster resources.

Enable termination protection
Enable termination protection. This way, if a core node is deny listed, you can connect to the associated Amazon EC2 instance using SSH to troubleshoot and recover data. If you enable termination protection, be aware that Amazon EMR does not replace the Amazon EC2 instance with a new instance. For more information, see Using termination protection.

Create an alarm for the MRUnhealthyNodes CloudWatch metric
This metric reports the number of nodes reporting an UNHEALTHY status. You can set up a notification for this alarm to warn you of unhealthy nodes before the 45-minute timeout is reached. For more information, see Monitor metrics with CloudWatch.

Adjust yarn-site settings
The settings below can be adjusted according to your application requirements. For example, you may want to raise the disk utilization threshold at which a node reports UNHEALTHY by increasing the value of yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage. You can set these values when you create a cluster using the yarn-site configuration classification. For more information, see Configuring applications in the Amazon EMR Release Guide. You can also connect to the Amazon EC2 instances associated with core nodes using SSH, and then add the values to /etc/hadoop/conf.empty/yarn-site.xml using a text editor. After making the change, you must restart hadoop-yarn-nodemanager.

yarn.nodemanager.disk-health-checker.interval-ms: The frequency (in milliseconds) at which the disk health checker runs.

yarn.nodemanager.disk-health-checker.min-healthy-disks: The minimum fraction of the number of disks that must be healthy for the NodeManager to launch new containers. This corresponds to both yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs (by default, /var/log/hadoop-yarn/containers, which is symlinked to /mnt/var/log/hadoop-yarn/containers).

yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage: The maximum percentage of disk space utilization allowed, after which a disk is marked as bad. If the value is greater than or equal to 100, the NodeManager checks for a full disk. This applies to both yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs.

yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb: The minimum space that must be available on a disk for it to be used.
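As a sketch, an alarm on the MRUnhealthyNodes metric can be created with the AWS CLI. The cluster ID, SNS topic ARN, and alarm name below are placeholders you would substitute for your own; the Amazon EMR CloudWatch metrics live in the AWS/ElasticMapReduce namespace with a JobFlowId dimension:

```shell
# Alarm whenever any node reports UNHEALTHY (MRUnhealthyNodes > 0).
# j-XXXXXXXXXXXXX and the SNS topic ARN are placeholders.
aws cloudwatch put-metric-alarm \
  --alarm-name emr-unhealthy-nodes \
  --namespace AWS/ElasticMapReduce \
  --metric-name MRUnhealthyNodes \
  --dimensions Name=JobFlowId,Value=j-XXXXXXXXXXXXX \
  --statistic Maximum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 0 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:my-topic
```

A short period (here, five minutes) matters because the alarm is only useful if it fires well inside the 45-minute window before Amazon EMR marks the instance for termination.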
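As an example of the yarn-site configuration classification mentioned above, the following JSON raises the disk utilization threshold to 99 percent when passed at cluster creation. The value is illustrative, not a recommendation:

```json
[
  {
    "Classification": "yarn-site",
    "Properties": {
      "yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage": "99.0"
    }
  }
]
```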
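The exact command to restart the NodeManager after editing yarn-site.xml depends on your EMR release; on releases that manage daemons with systemd (5.30.0 and later), it is roughly the following. Verify against the documentation for your release:

```shell
sudo systemctl stop hadoop-yarn-nodemanager
sudo systemctl start hadoop-yarn-nodemanager
```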
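To make the interplay of the disk health-checker settings concrete, here is a small Python sketch of the decision logic as described above. The function and parameter names are hypothetical illustrations, not NodeManager source code:

```python
# Illustrative sketch of the YARN disk health-checker semantics described
# in the text. Names and defaults here are assumptions for illustration.

def disk_is_bad(utilization_pct: float, free_space_mb: float,
                max_utilization_pct: float = 90.0,
                min_free_space_mb: float = 0.0) -> bool:
    """A disk is marked bad when it exceeds the utilization threshold
    or falls below the minimum required free space."""
    # A threshold >= 100 effectively means "only a completely full disk is bad".
    if max_utilization_pct >= 100.0:
        return utilization_pct >= 100.0 or free_space_mb < min_free_space_mb
    return utilization_pct > max_utilization_pct or free_space_mb < min_free_space_mb


def node_is_healthy(disks: list[tuple[float, float]],
                    min_healthy_fraction: float = 0.25,
                    max_utilization_pct: float = 90.0,
                    min_free_space_mb: float = 0.0) -> bool:
    """The NodeManager keeps launching containers only while at least
    min_healthy_fraction of its disks remain healthy."""
    healthy = sum(
        1 for util, free in disks
        if not disk_is_bad(util, free, max_utilization_pct, min_free_space_mb)
    )
    return healthy / len(disks) >= min_healthy_fraction


# Example: two disks, one at 95% utilization (bad at a 90% threshold),
# one at 50% (healthy); both have ample free space.
disks = [(95.0, 10_000.0), (50.0, 200_000.0)]
print(node_is_healthy(disks))                            # 1 of 2 disks healthy
print(node_is_healthy(disks, min_healthy_fraction=1.0))  # all disks required
```

Raising max_utilization_pct in this sketch, like raising the real max-disk-utilization-per-disk-percentage property, simply moves the point at which a disk flips to bad and the node drops below the healthy-disk fraction.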