Python Based Bootstrap Helper Functions
Python script based bootstrap helper functions exist to update Ambari configurations. You can use these helper functions to:
- Update the configuration of a Hadoop component in the cluster
- Update the configuration that's applicable to a particular host or group of hosts
- Obtain the configuration value and take actions based on the value
- Export configuration from a config type (for example, HDFS core-site.xml)
- Add components such as ZooKeeper, NameNode, and ResourceManager to the master nodes in an HA cluster.
- Remove components such as DataNode and NodeManager.
- You can run these helper functions only from the predefined method in the Python script.
- Python 2.7 is the supported Python version for the script.
- You manage the Python script at your own risk because a faulty script can lead to a microservice shutdown.
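For example, the predefined method is execute, which receives a helper object that exposes the functions listed in Bootstrap Python Script Helper Functions. The following is only a minimal sketch of that structure; the logged message is illustrative.
#!/usr/bin/env python2.7
def execute(helper):
    # All helper functions are called on the helper object passed to execute()
    logger = helper.getLogger()
    logger.info('Bootstrap helper functions are available on the helper object')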
To run a bootstrap script, see Running the Bootstrap Script.
For more information on Python bootstrap script helper functions, see:
Bootstrap Python Script Helper Functions
For Python based bootstrap helper function examples, see Python Script Examples.
Category | Helper Function | Functionality |
---|---|---|
Config Groups | createConfigGroup(service_name, hosts, config_group="Default") | This helper function creates a config group for a service with a list of hosts. You must provide the service name, list of hosts, and config group name as input. |
 | removeConfigGroup(service_name, config_group="Default") | This helper function removes the config group from a service. You must provide the service name and config group name as input. |
 | addHostsInConfigGroup(service_name, hosts, config_group="Default") | This helper function adds hosts to the config group in a service. You must provide the service name, list of hosts, and config group name as input. |
 | removeHostsInConfigGroup(service_name, hosts, config_group="Default") | This helper function removes hosts from the config group in a service. You must provide the service name, list of hosts, and config group name as input. |
 | listConfigGroups(service_name) | This helper function displays all the config groups in a service. You must provide the service name. |
 | getHostsInConfigGroup(service_name, config_group="Default") | This helper function displays all the hosts in the config group in a service. You must provide the config group and service name. |
 | importConfigGroup(service_name, from_config_group, to_config_group) | This helper function clones the configs from one config group to another config group in a service. You must provide the from config group, to config group, and service name. |
Config | updateConfig(service_name, config_type, config_properties, config_group="Default", config_meta={}) | This helper function updates the config properties in the config_type file of config_group in a service. You must provide the service name, config type, config properties, config group, and config meta. Note: config_properties is a map of config key and value pairs. config_meta is a map of config key and config metadata. Possible type values in config_meta are PASSWORD and TEXT. For examples, see Python Script Examples. |
 | removeConfig(service_name, config_type, config_name, config_group="Default") | This helper function removes the config from the config type file of a config group in a service. You must provide the config group, config type, config name, and service name. For examples, see Python Script Examples. |
 | getConfig(service_name, config_type, config_name, config_group="Default") | This helper function displays the config property value of the config name in the config type file of a config group in a service. You must provide the config group, config type, config name, and service name. |
 | restartStaleConfig() | This helper function restarts the services where configs are stale. |
 | exportConfig(service_name, config_type, config_group="Default") | This helper function displays all the configs in the config type file of a config group in a service. You must provide the config group, config type, and service name. |
Shell execution | runShellCommandOnAllNodes(command) | This helper function executes a shell command on all the cluster nodes. You must provide the command. |
 | runShellCommandOnNode(command, host) | This helper function executes a shell command on the requested host. You must provide the command and host. |
Utility | getClusterName() | Displays the cluster name. |
 | getMasterNodesIps() | Displays the master node IPs. |
 | getWorkerNodesIps() | Displays the worker node IPs. |
 | getUtilityNodesIps() | Displays the utility node IPs. |
 | getQueryServerNodesIps() | Displays the query server node IPs. |
 | getComputeOnlyWorkerNodesIps() | Displays the compute only worker node IPs. |
 | getEdgeNodesIps() | Displays the edge node IPs. |
 | getAllNodesIps() | Displays the IPs of all nodes. |
 | getEventType() | Displays the event type. Possible event type values are CreateBdsInstance, AddWorkerNodes, StartBdsInstance, ChangeShape, and on-demand. |
 | getLogger() | This helper function returns the logger instance, which you can use to log info and error logs. For examples, see Python Script Examples. |
 | get_last_added_host_names() | Returns a Python list of hostnames added in the last "Add Node" operation. Returns None if no new hosts were added to the cluster. |
 | get_last_added_host_ips() | Returns a Python list of private IPs added in the last "Add Node" operation. Returns None if no new hosts were added to the cluster. |
 | executeAmbariFunc(method, path, payload=None, params=None, headers=None) | This helper function runs an Ambari REST API call based on the method type, path, and payload. You must provide the method, path, payload (if any), params (if any), and headers. For an example, see Python Script Examples. |
Remote JMX | updateRemoteJmx(service_names=None, component_names_mapping=None, enable=True) | This helper function enables or disables the remote JMX metric. |
 | getRemoteJmxInfo() | This helper function returns the current remote JMX information. |
Add components | add_component_to_host(request_dict) | This function takes a Python dictionary and adds components to the hosts mentioned in the dictionary. Supported components include ZooKeeper, NameNode, and ResourceManager. For examples, see Python Script Examples. Caution: Support for NameNode and ResourceManager is only for HA clusters and only for hosts that have default config groups. |
Remove components | remove_component_from_host(request_dict) | This function takes a Python dictionary and removes components from the hosts mentioned in the dictionary, with timeouts. Supported components include DataNode and NodeManager. For an example, see Python Script Examples. |
List of supported service names = [HDFS, YARN, MAPREDUCE2, TEZ, HIVE, OOZIE, ZOOKEEPER, AMBARI_METRICS, RANGER, HUE, KAFKA, KERBEROS, ODHUTILS, SPARK3, HBASE, TRINO]
List of supported event types = ["CreateBdsInstance", "AddWorkerNodes", "StartBdsInstance", "ChangeShape"]
List of supported config_meta type values = ["PASSWORD", "TEXT"]
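The remote JMX helpers are not covered by the examples in the next section. The following is a minimal sketch based only on the signatures documented above; passing service_names as a Python list and logging the returned value of getRemoteJmxInfo are assumptions, not behavior confirmed by this documentation.
#!/usr/bin/env python2.7
def execute(helper):
    logger = helper.getLogger()
    # Log the current remote JMX information before making changes
    logger.info("Remote JMX info: " + str(helper.getRemoteJmxInfo()))
    # Enable the remote JMX metric for the HDFS service
    # (service_names as a list is an assumed input shape)
    helper.updateRemoteJmx(service_names=["HDFS"], enable=True)
    logger.info("Remote JMX info after update: " + str(helper.getRemoteJmxInfo()))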
Python Script Examples
For more information on the Python helper functions, see Bootstrap Python Script Helper Functions.
#!/usr/bin/env python2.7
def execute(helper):
# custom logger
logger = helper.getLogger()
logger.info('Testing Config Helper functions')
# Update config_properties in the Default config group of config_type file core-site in service HDFS
helper.updateConfig(service_name="HDFS", config_type="core-site", config_properties={"fs.trash.interval": "400"})
# Remove config property from Default config group of config_type core-site
helper.removeConfig(service_name="HDFS", config_type="core-site", config_name="fs.trash.interval")
# Get config value from Default config group of config_type file core-site in service HDFS
config_value = helper.getConfig(service_name="HDFS", config_type="core-site", config_name="fs.trash.interval")
# Export configs from Default config group of config_type file core-site in service HDFS
helper.exportConfig(service_name="HDFS", config_type="core-site")
# Restart stale config
helper.restartStaleConfig()
#!/usr/bin/env python2.7
def execute(helper):
logger = helper.getLogger()
logger.info("Custom logger logs are available in '/var/logs/oracle/bds/bootstrap/' directory of mn0 node")
logger.info("Execute get utility nodes ips")
utility_node_ips = helper.getUtilityNodesIps()
logger.info("Execute shell command on utility node")
helper.runShellCommandOnNode(command='pwd', host=utility_node_ips[0])
logger.info("Execute shell command for on-demand event type")
event_type = helper.getEventType()
if event_type == "on-demand":
helper.runShellCommandOnNode(command='ls', host=utility_node_ips[0])
logger.info("Create config group test in service HDFS")
helper.createConfigGroup(config_group="test", service_name="HDFS", hosts=[])
logger.info("Add Worker nodes as hosts to above created config group test in service HDFS")
worker_node_ips = helper.getWorkerNodesIps()
helper.addHostsInConfigGroup(service_name="HDFS", hosts=worker_node_ips, config_group="test")
logger.info("Update config_properties in the config group test of config_type file core-site in service HDFS")
helper.updateConfig(config_group="test", service_name="HDFS", config_type="core-site",
config_properties={"fs.trash.interval": "400"})
logger.info("Restart stale configs")
helper.restartStaleConfig()
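The remaining config group helpers (listConfigGroups, getHostsInConfigGroup, importConfigGroup, removeHostsInConfigGroup, and removeConfigGroup) have no example above. The following sketch continues from the previous script: it assumes the config group test created there still exists, and it assumes the list and get helpers return Python values that can be logged. It only illustrates the documented signatures.
#!/usr/bin/env python2.7
def execute(helper):
    logger = helper.getLogger()
    # List every config group defined for the HDFS service
    logger.info("HDFS config groups: " + str(helper.listConfigGroups(service_name="HDFS")))
    # Show the hosts that belong to the config group test created in the previous example
    logger.info("Hosts in config group test: " +
                str(helper.getHostsInConfigGroup(service_name="HDFS", config_group="test")))
    # Clone the configs of the test group into a new group
    helper.createConfigGroup(service_name="HDFS", config_group="test-copy", hosts=[])
    helper.importConfigGroup(service_name="HDFS", from_config_group="test",
                             to_config_group="test-copy")
    # Remove the worker hosts from the test group, then remove the group itself
    worker_node_ips = helper.getWorkerNodesIps()
    helper.removeHostsInConfigGroup(service_name="HDFS", hosts=worker_node_ips,
                                    config_group="test")
    helper.removeConfigGroup(service_name="HDFS", config_group="test")
    # Restart services whose configs are stale after these changes
    helper.restartStaleConfig()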
#!/usr/bin/env python2.7
def execute(helper):
logger = helper.getLogger()
logger.info("Executing getWorkerNodesIps")
worker_node_ips = helper.getWorkerNodesIps()
logger.info("Executing createConfigGroup")
helper.createConfigGroup(service_name="HDFS", config_group="testConfigGroup",
hosts=worker_node_ips)
logger.info("Executing updateConfigGroup")
helper.updateConfig(config_group="test", service_name="HDFS", config_type="core-site",
config_properties={"fs.trash.interval": "1000"})
logger.info("Executing updateConfig")
helper.updateConfig(service_name="HDFS", config_type="core-site", config_properties=
{"fs.trash.interval": "1000", "test.password": "TestPassword"}, config_meta={"test.password":
{"type":"PASSWORD"}})
logger.info("Executing restartStaleConfig")
helper.restartStaleConfig()
#!/usr/bin/env python2.7
def execute(helper):
# custom logger
logger = helper.getLogger()
    logger.info('Testing add component helper function')
    # Add NAMENODE and RESOURCEMANAGER to the hosts added in the last Add Node operation
hosts = helper.get_last_added_host_names()
request_dict = {
"NAMENODE": {
"hosts": hosts
},
"RESOURCEMANAGER": {
"hosts": hosts
}
}
helper.add_component_to_host(request_dict)
#!/usr/bin/env python2.7
def execute(helper):
# custom logger
logger = helper.getLogger()
    logger.info('Testing add component helper function')
    # Add NAMENODE and RESOURCEMANAGER components to the specified hosts
request_dict = {
"NAMENODE": {
"hosts": ["hostname1", "hostname2"]
},
"RESOURCEMANAGER": {
"hosts": ["hostname2", "hostname3"]
}
}
helper.add_component_to_host(request_dict)
#!/usr/bin/env python2.7
def execute(helper):
# custom logger
logger = helper.getLogger()
    logger.info('Testing remove component helper function')
    # Remove DATANODE and NODEMANAGER from the specified hosts with timeouts
request_dict = {
"DATANODE": {
"hosts": ["hostname1", "hostname2"],
"timeout_minutes": 40#(in minutes)
},
"NODEMANAGER": {
"hosts": ["hostname2", "hostname3"],
"timeout_minutes": 40#(in minutes)
}
}
helper.remove_component_from_host(request_dict)
#!/usr/bin/env python2.7
def execute(helper):
logger = helper.getLogger()
logger.info("Executing executeAmbariFunc")
response = helper.executeAmbariFunc(method='GET', path='clusters/<cluster_name>/services/')
logger.info("response : " + str(response))