
With RHOSP13, as the number of Overcloud nodes grew, we ran into various errors during deployment.

As I recall, it was somewhere around 150 to 200 nodes.

 

For the official Red Hat documentation, see the URL below:

https://www.redhat.com/en/blog/scaling-red-hat-openstack-platform-more-500-overcloud-nodes

 


 

1. The config changes are as follows.

If you apply the settings below directly on the director (undercloud) and later run openstack undercloud upgrade, the values are reset to their defaults.

So the settings below are fine as a temporary change; if you want them to survive an upgrade as well, put the values into hieradata.yaml as described in "2. Applying hieradata.yaml" below and then run openstack undercloud upgrade.

###keystone
/etc/keystone/keystone.conf
 We raised the number of Keystone admin workers to 32 and main workers to 24
By default, this is set to half the number of CPUs allocated to the director node.
[root@rhosp-director ~]# vi /etc/keystone/keystone.conf
admin_workers=32
public_workers=24
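
For reference, the CPU count that this default is derived from can be checked directly on the director (nproc prints the number of available CPUs; the default worker count is half of that):
[root@rhosp-director ~]# nproc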
 

/etc/httpd/conf.d/10-keystone_wsgi_admin.conf
Change the number of processes to 32
 
/etc/httpd/conf.d/10-keystone_wsgi_main.conf
Change the number of processes to 24

[root@rhosp-director ~]# vi /etc/httpd/conf.d/10-keystone_wsgi_admin.conf
  WSGIDaemonProcess keystone_admin display-name=keystone-admin group=keystone processes=32 threads=1 user=keystone

[root@rhosp-director ~]# vi /etc/httpd/conf.d/10-keystone_wsgi_main.conf
  WSGIDaemonProcess keystone_main display-name=keystone-main group=keystone processes=24 threads=1 user=keystone
"Keystone processes do not take a substantial amount of memory,  so it is safe to increase the process count. Even with 32 processes of admin workers, keystone admin takes around 3-4 GB of memory and with 24 processes, Keystone main takes around 2-3 GB of RSS memory."
 

We also had to enable caching with memcached to improve Keystone performance. (Configure Keystone to use memcached as its cache backend.)
[root@rhosp-director ~]# vi /etc/keystone/keystone.conf
[cache]
enabled = true
backend = dogpile.cache.memcached
Set the notification driver to noop
[root@rhosp-director ~]# vi /etc/keystone/keystone.conf
[oslo_messaging_notifications]
driver=noop
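
The dogpile.cache.memcached backend only helps if memcached is actually installed and running on the director. If it is not already present, something along these lines installs and enables it (commands assume a RHEL 7 based undercloud):
[root@rhosp-director ~]# yum install -y memcached
[root@rhosp-director ~]# systemctl enable --now memcached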


###Heat
/etc/heat/heat.conf
num_engine_workers=48
executor_thread_pool_size = 48
rpc_response_timeout=1200
[root@rhosp-director ~]# vi /etc/heat/heat.conf
num_engine_workers=48
executor_thread_pool_size = 48
rpc_response_timeout=1200
enable caching in /etc/heat/heat.conf
[root@rhosp-director ~]# vi /etc/heat/heat.conf
[cache]
backend = dogpile.cache.memcached
enabled = true
memcache_servers = 127.0.0.1


###MySQL
/etc/my.cnf.d/galera.cnf
[root@rhosp-director ~]# vi /etc/my.cnf.d/galera.cnf
[mysqld]
innodb_buffer_pool_size = 5G
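
The buffer pool change only takes effect after MariaDB is restarted. A quick way to apply and confirm it (restart command assumed for a non-containerized RHOSP 13 undercloud; note that restarting MariaDB briefly interrupts the undercloud services) is:
[root@rhosp-director ~]# systemctl restart mariadb
[root@rhosp-director ~]# mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"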


###Neutron
/etc/neutron/neutron.conf
[root@rhosp-director ~]# vi /etc/neutron/neutron.conf
notification_driver=noop


###Ironic
/etc/ironic/ironic.conf
This reduces ironic-conductor's CPU usage (node power states are synced every 180 seconds instead of the default 60).
[root@rhosp-director ~]# vi /etc/ironic/ironic.conf
[conductor]
sync_power_state_interval = 180


###Mistral
/etc/mistral/mistral.conf
The execution_field_size_limit_kb value needs to be increased. (There doesn't seem to be a single recommended value; raise it as needed for your environment.)
[root@rhosp-director ~]# vi /etc/mistral/mistral.conf
[DEFAULT]
rpc_response_timeout=600

[engine]
execution_field_size_limit_kb=32768


###Nova
/etc/nova/nova.conf
[root@rhosp-director ~]# vi /etc/nova/nova.conf
[oslo_messaging_notifications]
driver=noop
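
None of the edits above take effect until the corresponding services pick them up, so after changing the files the affected services need to be restarted. As a rough sketch (service names assumed for a non-containerized RHOSP 13 undercloud; adjust to what is actually running in your environment):
[root@rhosp-director ~]# systemctl restart httpd                  # Keystone runs as WSGI under httpd
[root@rhosp-director ~]# systemctl restart openstack-heat-engine openstack-heat-api
[root@rhosp-director ~]# systemctl restart neutron-server
[root@rhosp-director ~]# systemctl restart openstack-ironic-conductor
[root@rhosp-director ~]# systemctl restart openstack-mistral-api openstack-mistral-engine openstack-mistral-executor
[root@rhosp-director ~]# systemctl restart openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-compute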

 

 

2. Applying hieradata.yaml

# How to apply hieradata.yaml
vi /home/stack/undercloud.conf

hieradata_override = /home/stack/templates/etc/hieradata.yaml     <- add this line


# hieradata.yaml
vi /home/stack/templates/etc/hieradata.yaml

nova::compute::ironic::max_concurrent_builds: 20
nova::rpc_response_timeout: 600
nova::wsgi::apache::workers: 8
nova::wsgi::apache::threads: 1
mysql_max_connections: 8192
tripleo::profile::base::database::mysql::mysql_server_options:
  'mysqld':
    innodb_buffer_pool_instances: 4
    innodb_buffer_pool_size: '5G'
    tmp_table_size: '128M'
    bind-address: "%{hiera('controller_host')}"
    innodb_file_per_table: 'ON'
    max_allowed_packet: '64M'
    connect_timeout: '60'
heat::rpc_response_timeout: 1200
heat::max_json_body_size: 8388608
heat::config::heat_config:
    DEFAULT/executor_thread_pool_size:
        value: 48
heat::engine::num_engine_workers: 48
heat::yaql_memory_quota: 100000
heat::yaql_limit_iterators: 10000
ironic::rpc_response_timeout: 600
ironic::config::ironic_config:
    DEFAULT/executor_thread_pool_size:
        value: 20
    conductor/sync_power_state_interval:
        value: 180
keystone::wsgi::apache::workers: 32
keystone::wsgi::apache::threads: 1
mistral::engine::execution_field_size_limit_kb: 65536
mistral::rpc_response_timeout: 600
zaqar::max_messages_post_size: 3145728
zaqar::config::zaqar_config:
  oslo_messaging_kafka/producer_batch_size:
    value: 65536
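
Before running openstack undercloud upgrade with this override in place, it is worth confirming that the file parses as valid YAML, since an indentation mistake will break the hieradata merge. A simple check, assuming python and PyYAML are available on the director (they normally are on an undercloud), is:
[root@rhosp-director ~]# python -c "import yaml; yaml.safe_load(open('/home/stack/templates/etc/hieradata.yaml')); print('parsed OK')"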

 

 
